Test Report: KVM_Linux_crio 19011

86685d02f89d02484c16ac75ab0cd1e5f6c63d49:2024-06-03:34742

Failed tests (31/312)

| Order | Failed test | Duration (s) |
|-------|-------------|--------------|
| 30 | TestAddons/parallel/Ingress | 155.09 |
| 32 | TestAddons/parallel/MetricsServer | 328.63 |
| 38 | TestAddons/parallel/LocalPath | 13.37 |
| 45 | TestAddons/StoppedEnableDisable | 154.24 |
| 164 | TestMultiControlPlane/serial/StopSecondaryNode | 142.03 |
| 166 | TestMultiControlPlane/serial/RestartSecondaryNode | 55.03 |
| 168 | TestMultiControlPlane/serial/RestartClusterKeepsNodes | 385.22 |
| 171 | TestMultiControlPlane/serial/StopCluster | 141.72 |
| 231 | TestMultiNode/serial/RestartKeepsNodes | 299.4 |
| 233 | TestMultiNode/serial/StopMultiNode | 141.39 |
| 240 | TestPreload | 283.82 |
| 248 | TestKubernetesUpgrade | 380.04 |
| 285 | TestPause/serial/SecondStartNoReconfiguration | 94.53 |
| 314 | TestStartStop/group/old-k8s-version/serial/FirstStart | 284.27 |
| 339 | TestStartStop/group/no-preload/serial/Stop | 139.3 |
| 342 | TestStartStop/group/embed-certs/serial/Stop | 139.04 |
| 345 | TestStartStop/group/default-k8s-diff-port/serial/Stop | 139.18 |
| 346 | TestStartStop/group/no-preload/serial/EnableAddonAfterStop | 12.38 |
| 347 | TestStartStop/group/old-k8s-version/serial/DeployApp | 0.49 |
| 348 | TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive | 89.46 |
| 350 | TestStartStop/group/embed-certs/serial/EnableAddonAfterStop | 12.38 |
| 352 | TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop | 12.38 |
| 356 | TestStartStop/group/old-k8s-version/serial/SecondStart | 744.2 |
| 357 | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop | 544.58 |
| 358 | TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop | 544.66 |
| 359 | TestStartStop/group/no-preload/serial/UserAppExistsAfterStop | 544.72 |
| 360 | TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop | 543.73 |
| 361 | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop | 439.94 |
| 362 | TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop | 448.89 |
| 363 | TestStartStop/group/no-preload/serial/AddonExistsAfterStop | 321.14 |
| 364 | TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop | 139.12 |
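Each failure can usually be reproduced in isolation by re-running only the affected test against the same driver and container runtime. A minimal sketch of such a re-run, assuming a minikube source checkout with out/minikube-linux-amd64 already built; the flag names (-minikube-start-args, the integration build tag) are recalled from minikube's contributor documentation rather than taken from this report, so verify them against the checkout before use:

    # from the minikube repo root; re-runs only the Ingress addon test on kvm2 + crio
    go test ./test/integration --tags=integration \
      -test.run "TestAddons/parallel/Ingress" -test.timeout=90m \
      -minikube-start-args="--driver=kvm2 --container-runtime=crio"
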
TestAddons/parallel/Ingress (155.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-699562 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-699562 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-699562 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [22eac9e0-47f1-46a1-9745-87ca515de64e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [22eac9e0-47f1-46a1-9745-87ca515de64e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004672157s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-699562 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.15892694s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
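The remote curl exited with status 28, curl's operation-timed-out code, so no response came back from the ingress endpoint before the command gave up. The failing check can be repeated by hand against the same profile; the first command is taken verbatim from the transcript above, while the kubectl inspection of the ingress-nginx namespace is only a suggested follow-up, not part of the test:

    out/minikube-linux-amd64 -p addons-699562 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-699562 -n ingress-nginx get pods,svc
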
addons_test.go:288: (dbg) Run:  kubectl --context addons-699562 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.241
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-699562 addons disable ingress-dns --alsologtostderr -v=1: (2.261683793s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-699562 addons disable ingress --alsologtostderr -v=1: (7.686695231s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-699562 -n addons-699562
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-699562 logs -n 25: (1.282446529s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-640021 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |                     |
	|         | -p download-only-640021                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| delete  | -p download-only-640021                                                                     | download-only-640021 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| delete  | -p download-only-979896                                                                     | download-only-979896 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| delete  | -p download-only-640021                                                                     | download-only-640021 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-778765 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |                     |
	|         | binary-mirror-778765                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35769                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-778765                                                                     | binary-mirror-778765 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| addons  | enable dashboard -p                                                                         | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |                     |
	|         | addons-699562                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |                     |
	|         | addons-699562                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-699562 --wait=true                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	|         | -p addons-699562                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-699562 ssh cat                                                                       | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	|         | /opt/local-path-provisioner/pvc-322948b5-f737-472a-a023-d147f813616b_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-699562 ip                                                                            | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | addons-699562                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | addons-699562                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-699562 ssh curl -s                                                                   | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | -p addons-699562                                                                            |                      |         |         |                     |                     |
	| addons  | addons-699562 addons                                                                        | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-699562 addons                                                                        | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-699562 ip                                                                            | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:24:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:24:24.395017 1086826 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:24:24.395285 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:24:24.395295 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:24:24.395299 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:24:24.395564 1086826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:24:24.396217 1086826 out.go:298] Setting JSON to false
	I0603 12:24:24.397840 1086826 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11211,"bootTime":1717406253,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:24:24.397980 1086826 start.go:139] virtualization: kvm guest
	I0603 12:24:24.400088 1086826 out.go:177] * [addons-699562] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:24:24.401631 1086826 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:24:24.401592 1086826 notify.go:220] Checking for updates...
	I0603 12:24:24.403113 1086826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:24:24.404638 1086826 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:24:24.406028 1086826 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:24:24.407381 1086826 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:24:24.408703 1086826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:24:24.410378 1086826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:24:24.441869 1086826 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 12:24:24.443443 1086826 start.go:297] selected driver: kvm2
	I0603 12:24:24.443462 1086826 start.go:901] validating driver "kvm2" against <nil>
	I0603 12:24:24.443474 1086826 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:24:24.444153 1086826 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:24:24.444232 1086826 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:24:24.459337 1086826 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:24:24.459391 1086826 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 12:24:24.459645 1086826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:24:24.459722 1086826 cni.go:84] Creating CNI manager for ""
	I0603 12:24:24.459739 1086826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:24:24.459752 1086826 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 12:24:24.459835 1086826 start.go:340] cluster config:
	{Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:24:24.459949 1086826 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:24:24.461749 1086826 out.go:177] * Starting "addons-699562" primary control-plane node in "addons-699562" cluster
	I0603 12:24:24.462982 1086826 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:24:24.463022 1086826 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 12:24:24.463036 1086826 cache.go:56] Caching tarball of preloaded images
	I0603 12:24:24.463123 1086826 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:24:24.463134 1086826 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:24:24.463498 1086826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/config.json ...
	I0603 12:24:24.463531 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/config.json: {Name:mka3fc11f119399ce4f1970b76b906c714896655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:24.463710 1086826 start.go:360] acquireMachinesLock for addons-699562: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:24:24.463793 1086826 start.go:364] duration metric: took 57.075µs to acquireMachinesLock for "addons-699562"
	I0603 12:24:24.463819 1086826 start.go:93] Provisioning new machine with config: &{Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:24:24.463894 1086826 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 12:24:24.465633 1086826 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0603 12:24:24.465788 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:24:24.465842 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:24:24.480246 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33161
	I0603 12:24:24.480702 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:24:24.481265 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:24:24.481286 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:24:24.481607 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:24:24.481786 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
	I0603 12:24:24.481950 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:24.482080 1086826 start.go:159] libmachine.API.Create for "addons-699562" (driver="kvm2")
	I0603 12:24:24.482118 1086826 client.go:168] LocalClient.Create starting
	I0603 12:24:24.482153 1086826 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 12:24:24.830722 1086826 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 12:24:25.061334 1086826 main.go:141] libmachine: Running pre-create checks...
	I0603 12:24:25.061363 1086826 main.go:141] libmachine: (addons-699562) Calling .PreCreateCheck
	I0603 12:24:25.061875 1086826 main.go:141] libmachine: (addons-699562) Calling .GetConfigRaw
	I0603 12:24:25.062377 1086826 main.go:141] libmachine: Creating machine...
	I0603 12:24:25.062395 1086826 main.go:141] libmachine: (addons-699562) Calling .Create
	I0603 12:24:25.062542 1086826 main.go:141] libmachine: (addons-699562) Creating KVM machine...
	I0603 12:24:25.063695 1086826 main.go:141] libmachine: (addons-699562) DBG | found existing default KVM network
	I0603 12:24:25.064418 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.064281 1086848 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015340}
	I0603 12:24:25.064476 1086826 main.go:141] libmachine: (addons-699562) DBG | created network xml: 
	I0603 12:24:25.064498 1086826 main.go:141] libmachine: (addons-699562) DBG | <network>
	I0603 12:24:25.064510 1086826 main.go:141] libmachine: (addons-699562) DBG |   <name>mk-addons-699562</name>
	I0603 12:24:25.064522 1086826 main.go:141] libmachine: (addons-699562) DBG |   <dns enable='no'/>
	I0603 12:24:25.064532 1086826 main.go:141] libmachine: (addons-699562) DBG |   
	I0603 12:24:25.064546 1086826 main.go:141] libmachine: (addons-699562) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 12:24:25.064557 1086826 main.go:141] libmachine: (addons-699562) DBG |     <dhcp>
	I0603 12:24:25.064570 1086826 main.go:141] libmachine: (addons-699562) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 12:24:25.064619 1086826 main.go:141] libmachine: (addons-699562) DBG |     </dhcp>
	I0603 12:24:25.064645 1086826 main.go:141] libmachine: (addons-699562) DBG |   </ip>
	I0603 12:24:25.064652 1086826 main.go:141] libmachine: (addons-699562) DBG |   
	I0603 12:24:25.064657 1086826 main.go:141] libmachine: (addons-699562) DBG | </network>
	I0603 12:24:25.064665 1086826 main.go:141] libmachine: (addons-699562) DBG | 
	I0603 12:24:25.069891 1086826 main.go:141] libmachine: (addons-699562) DBG | trying to create private KVM network mk-addons-699562 192.168.39.0/24...
	I0603 12:24:25.134687 1086826 main.go:141] libmachine: (addons-699562) DBG | private KVM network mk-addons-699562 192.168.39.0/24 created
	I0603 12:24:25.134727 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.134642 1086848 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:24:25.134742 1086826 main.go:141] libmachine: (addons-699562) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562 ...
	I0603 12:24:25.134761 1086826 main.go:141] libmachine: (addons-699562) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:24:25.134778 1086826 main.go:141] libmachine: (addons-699562) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:24:25.382531 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.382373 1086848 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa...
	I0603 12:24:25.538612 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.538462 1086848 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/addons-699562.rawdisk...
	I0603 12:24:25.538652 1086826 main.go:141] libmachine: (addons-699562) DBG | Writing magic tar header
	I0603 12:24:25.538667 1086826 main.go:141] libmachine: (addons-699562) DBG | Writing SSH key tar header
	I0603 12:24:25.538682 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.538619 1086848 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562 ...
	I0603 12:24:25.538813 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562 (perms=drwx------)
	I0603 12:24:25.538861 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562
	I0603 12:24:25.538880 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 12:24:25.538893 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 12:24:25.538899 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 12:24:25.538906 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 12:24:25.538917 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 12:24:25.538929 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 12:24:25.538943 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:24:25.538956 1086826 main.go:141] libmachine: (addons-699562) Creating domain...
	I0603 12:24:25.538971 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 12:24:25.538983 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 12:24:25.538990 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins
	I0603 12:24:25.538995 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home
	I0603 12:24:25.539005 1086826 main.go:141] libmachine: (addons-699562) DBG | Skipping /home - not owner
	I0603 12:24:25.539894 1086826 main.go:141] libmachine: (addons-699562) define libvirt domain using xml: 
	I0603 12:24:25.539914 1086826 main.go:141] libmachine: (addons-699562) <domain type='kvm'>
	I0603 12:24:25.539924 1086826 main.go:141] libmachine: (addons-699562)   <name>addons-699562</name>
	I0603 12:24:25.539931 1086826 main.go:141] libmachine: (addons-699562)   <memory unit='MiB'>4000</memory>
	I0603 12:24:25.539939 1086826 main.go:141] libmachine: (addons-699562)   <vcpu>2</vcpu>
	I0603 12:24:25.539949 1086826 main.go:141] libmachine: (addons-699562)   <features>
	I0603 12:24:25.539955 1086826 main.go:141] libmachine: (addons-699562)     <acpi/>
	I0603 12:24:25.539961 1086826 main.go:141] libmachine: (addons-699562)     <apic/>
	I0603 12:24:25.539966 1086826 main.go:141] libmachine: (addons-699562)     <pae/>
	I0603 12:24:25.539970 1086826 main.go:141] libmachine: (addons-699562)     
	I0603 12:24:25.539977 1086826 main.go:141] libmachine: (addons-699562)   </features>
	I0603 12:24:25.539982 1086826 main.go:141] libmachine: (addons-699562)   <cpu mode='host-passthrough'>
	I0603 12:24:25.539992 1086826 main.go:141] libmachine: (addons-699562)   
	I0603 12:24:25.540021 1086826 main.go:141] libmachine: (addons-699562)   </cpu>
	I0603 12:24:25.540040 1086826 main.go:141] libmachine: (addons-699562)   <os>
	I0603 12:24:25.540047 1086826 main.go:141] libmachine: (addons-699562)     <type>hvm</type>
	I0603 12:24:25.540051 1086826 main.go:141] libmachine: (addons-699562)     <boot dev='cdrom'/>
	I0603 12:24:25.540056 1086826 main.go:141] libmachine: (addons-699562)     <boot dev='hd'/>
	I0603 12:24:25.540063 1086826 main.go:141] libmachine: (addons-699562)     <bootmenu enable='no'/>
	I0603 12:24:25.540068 1086826 main.go:141] libmachine: (addons-699562)   </os>
	I0603 12:24:25.540072 1086826 main.go:141] libmachine: (addons-699562)   <devices>
	I0603 12:24:25.540080 1086826 main.go:141] libmachine: (addons-699562)     <disk type='file' device='cdrom'>
	I0603 12:24:25.540089 1086826 main.go:141] libmachine: (addons-699562)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/boot2docker.iso'/>
	I0603 12:24:25.540099 1086826 main.go:141] libmachine: (addons-699562)       <target dev='hdc' bus='scsi'/>
	I0603 12:24:25.540109 1086826 main.go:141] libmachine: (addons-699562)       <readonly/>
	I0603 12:24:25.540135 1086826 main.go:141] libmachine: (addons-699562)     </disk>
	I0603 12:24:25.540160 1086826 main.go:141] libmachine: (addons-699562)     <disk type='file' device='disk'>
	I0603 12:24:25.540175 1086826 main.go:141] libmachine: (addons-699562)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 12:24:25.540192 1086826 main.go:141] libmachine: (addons-699562)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/addons-699562.rawdisk'/>
	I0603 12:24:25.540205 1086826 main.go:141] libmachine: (addons-699562)       <target dev='hda' bus='virtio'/>
	I0603 12:24:25.540215 1086826 main.go:141] libmachine: (addons-699562)     </disk>
	I0603 12:24:25.540222 1086826 main.go:141] libmachine: (addons-699562)     <interface type='network'>
	I0603 12:24:25.540238 1086826 main.go:141] libmachine: (addons-699562)       <source network='mk-addons-699562'/>
	I0603 12:24:25.540251 1086826 main.go:141] libmachine: (addons-699562)       <model type='virtio'/>
	I0603 12:24:25.540261 1086826 main.go:141] libmachine: (addons-699562)     </interface>
	I0603 12:24:25.540273 1086826 main.go:141] libmachine: (addons-699562)     <interface type='network'>
	I0603 12:24:25.540288 1086826 main.go:141] libmachine: (addons-699562)       <source network='default'/>
	I0603 12:24:25.540297 1086826 main.go:141] libmachine: (addons-699562)       <model type='virtio'/>
	I0603 12:24:25.540306 1086826 main.go:141] libmachine: (addons-699562)     </interface>
	I0603 12:24:25.540313 1086826 main.go:141] libmachine: (addons-699562)     <serial type='pty'>
	I0603 12:24:25.540323 1086826 main.go:141] libmachine: (addons-699562)       <target port='0'/>
	I0603 12:24:25.540336 1086826 main.go:141] libmachine: (addons-699562)     </serial>
	I0603 12:24:25.540346 1086826 main.go:141] libmachine: (addons-699562)     <console type='pty'>
	I0603 12:24:25.540358 1086826 main.go:141] libmachine: (addons-699562)       <target type='serial' port='0'/>
	I0603 12:24:25.540372 1086826 main.go:141] libmachine: (addons-699562)     </console>
	I0603 12:24:25.540383 1086826 main.go:141] libmachine: (addons-699562)     <rng model='virtio'>
	I0603 12:24:25.540394 1086826 main.go:141] libmachine: (addons-699562)       <backend model='random'>/dev/random</backend>
	I0603 12:24:25.540400 1086826 main.go:141] libmachine: (addons-699562)     </rng>
	I0603 12:24:25.540407 1086826 main.go:141] libmachine: (addons-699562)     
	I0603 12:24:25.540420 1086826 main.go:141] libmachine: (addons-699562)     
	I0603 12:24:25.540427 1086826 main.go:141] libmachine: (addons-699562)   </devices>
	I0603 12:24:25.540450 1086826 main.go:141] libmachine: (addons-699562) </domain>
	I0603 12:24:25.540473 1086826 main.go:141] libmachine: (addons-699562) 
	I0603 12:24:25.546035 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:52:26:8d in network default
	I0603 12:24:25.546507 1086826 main.go:141] libmachine: (addons-699562) Ensuring networks are active...
	I0603 12:24:25.546533 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:25.547150 1086826 main.go:141] libmachine: (addons-699562) Ensuring network default is active
	I0603 12:24:25.547454 1086826 main.go:141] libmachine: (addons-699562) Ensuring network mk-addons-699562 is active
	I0603 12:24:25.547879 1086826 main.go:141] libmachine: (addons-699562) Getting domain xml...
	I0603 12:24:25.548531 1086826 main.go:141] libmachine: (addons-699562) Creating domain...
	I0603 12:24:26.908511 1086826 main.go:141] libmachine: (addons-699562) Waiting to get IP...
	I0603 12:24:26.909158 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:26.909617 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:26.909662 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:26.909614 1086848 retry.go:31] will retry after 278.583828ms: waiting for machine to come up
	I0603 12:24:27.190168 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:27.190625 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:27.190656 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:27.190586 1086848 retry.go:31] will retry after 372.5456ms: waiting for machine to come up
	I0603 12:24:27.565372 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:27.565870 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:27.565914 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:27.565824 1086848 retry.go:31] will retry after 296.896127ms: waiting for machine to come up
	I0603 12:24:27.864373 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:27.864848 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:27.864874 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:27.864792 1086848 retry.go:31] will retry after 404.252126ms: waiting for machine to come up
	I0603 12:24:28.270290 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:28.270670 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:28.270696 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:28.270636 1086848 retry.go:31] will retry after 599.58078ms: waiting for machine to come up
	I0603 12:24:28.871331 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:28.871741 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:28.871765 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:28.871690 1086848 retry.go:31] will retry after 952.068344ms: waiting for machine to come up
	I0603 12:24:29.825179 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:29.825523 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:29.825588 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:29.825498 1086848 retry.go:31] will retry after 1.104687103s: waiting for machine to come up
	I0603 12:24:30.931756 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:30.932080 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:30.932117 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:30.932013 1086848 retry.go:31] will retry after 1.141640091s: waiting for machine to come up
	I0603 12:24:32.075239 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:32.075624 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:32.075650 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:32.075551 1086848 retry.go:31] will retry after 1.323363823s: waiting for machine to come up
	I0603 12:24:33.401067 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:33.401447 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:33.401478 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:33.401379 1086848 retry.go:31] will retry after 1.79959901s: waiting for machine to come up
	I0603 12:24:35.202394 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:35.202849 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:35.202881 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:35.202784 1086848 retry.go:31] will retry after 2.402984849s: waiting for machine to come up
	I0603 12:24:37.608253 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:37.608533 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:37.608549 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:37.608522 1086848 retry.go:31] will retry after 3.335405184s: waiting for machine to come up
	I0603 12:24:40.945518 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:40.945934 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:40.945954 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:40.945909 1086848 retry.go:31] will retry after 3.713074283s: waiting for machine to come up
	I0603 12:24:44.660565 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:44.661082 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:44.661109 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:44.661034 1086848 retry.go:31] will retry after 5.622787495s: waiting for machine to come up
	I0603 12:24:50.285257 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.285752 1086826 main.go:141] libmachine: (addons-699562) Found IP for machine: 192.168.39.241
	I0603 12:24:50.285779 1086826 main.go:141] libmachine: (addons-699562) Reserving static IP address...
	I0603 12:24:50.285794 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has current primary IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.286188 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find host DHCP lease matching {name: "addons-699562", mac: "52:54:00:d2:ff:f6", ip: "192.168.39.241"} in network mk-addons-699562
	I0603 12:24:50.392337 1086826 main.go:141] libmachine: (addons-699562) DBG | Getting to WaitForSSH function...
	I0603 12:24:50.392376 1086826 main.go:141] libmachine: (addons-699562) Reserved static IP address: 192.168.39.241
	I0603 12:24:50.392391 1086826 main.go:141] libmachine: (addons-699562) Waiting for SSH to be available...
	I0603 12:24:50.394776 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.395257 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.395285 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.395509 1086826 main.go:141] libmachine: (addons-699562) DBG | Using SSH client type: external
	I0603 12:24:50.395533 1086826 main.go:141] libmachine: (addons-699562) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa (-rw-------)
	I0603 12:24:50.395566 1086826 main.go:141] libmachine: (addons-699562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:24:50.395588 1086826 main.go:141] libmachine: (addons-699562) DBG | About to run SSH command:
	I0603 12:24:50.395604 1086826 main.go:141] libmachine: (addons-699562) DBG | exit 0
	I0603 12:24:50.517849 1086826 main.go:141] libmachine: (addons-699562) DBG | SSH cmd err, output: <nil>: 
	I0603 12:24:50.518082 1086826 main.go:141] libmachine: (addons-699562) KVM machine creation complete!
	I0603 12:24:50.518448 1086826 main.go:141] libmachine: (addons-699562) Calling .GetConfigRaw
	I0603 12:24:50.550682 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:50.551029 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:50.551273 1086826 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 12:24:50.551288 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:24:50.552696 1086826 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 12:24:50.552714 1086826 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 12:24:50.552722 1086826 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 12:24:50.552730 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:50.554915 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.555224 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.555250 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.555415 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:50.555599 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.555761 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.555931 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:50.556114 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:50.556316 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:50.556330 1086826 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 12:24:50.656762 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:24:50.656797 1086826 main.go:141] libmachine: Detecting the provisioner...
	I0603 12:24:50.656809 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:50.659643 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.660034 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.660063 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.660274 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:50.660454 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.660743 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.660933 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:50.661128 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:50.661342 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:50.661355 1086826 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 12:24:50.763353 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 12:24:50.763438 1086826 main.go:141] libmachine: found compatible host: buildroot
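The provisioner is picked by reading /etc/os-release from the guest (ID=buildroot above). Below is a minimal Go sketch of that key=value parsing, assuming matching on the ID field is all that is needed; parseOSRelease is an illustrative helper, not minikube's actual code.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease extracts KEY=VALUE pairs from /etc/os-release-style text,
	// trimming optional quotes around the value.
	func parseOSRelease(contents string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(contents))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			k, v, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			out[k] = strings.Trim(v, `"`)
		}
		return out
	}

	func main() {
		// Sample text mirroring the cat /etc/os-release output above.
		osr := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		info := parseOSRelease(osr)
		fmt.Println("provisioner:", info["ID"]) // provisioner: buildroot
	}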
	I0603 12:24:50.763447 1086826 main.go:141] libmachine: Provisioning with buildroot...
	I0603 12:24:50.763457 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
	I0603 12:24:50.763772 1086826 buildroot.go:166] provisioning hostname "addons-699562"
	I0603 12:24:50.763801 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
	I0603 12:24:50.764045 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:50.766806 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.767124 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.767155 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.767267 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:50.767455 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.767658 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.767810 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:50.768006 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:50.768174 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:50.768185 1086826 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-699562 && echo "addons-699562" | sudo tee /etc/hostname
	I0603 12:24:50.884135 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-699562
	
	I0603 12:24:50.884174 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:50.886988 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.887370 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.887399 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.887522 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:50.887747 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.887929 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.888065 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:50.888223 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:50.888400 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:50.888415 1086826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-699562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-699562/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-699562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:24:50.994596 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
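The hostname and /etc/hosts edits above are plain shell snippets executed over SSH with the "native" Go client. Below is a minimal sketch of running one such command with golang.org/x/crypto/ssh, assuming key-based auth and the address shown in the log; runRemote and the shortened key path are illustrative, not minikube's ssh_runner.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote executes one command on the guest over SSH with a private key,
	// roughly what the native SSH client lines above are doing.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.39.241:22", "docker",
			"/home/jenkins/.minikube/machines/addons-699562/id_rsa", // illustrative key path
			"hostname")
		fmt.Println(out, err)
	}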
	I0603 12:24:50.994627 1086826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:24:50.994678 1086826 buildroot.go:174] setting up certificates
	I0603 12:24:50.994690 1086826 provision.go:84] configureAuth start
	I0603 12:24:50.994705 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
	I0603 12:24:50.994999 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
	I0603 12:24:50.997877 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.998222 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.998250 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.998373 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.000223 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.000545 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.000580 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.000681 1086826 provision.go:143] copyHostCerts
	I0603 12:24:51.000769 1086826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:24:51.000904 1086826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:24:51.000977 1086826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:24:51.001042 1086826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.addons-699562 san=[127.0.0.1 192.168.39.241 addons-699562 localhost minikube]
	I0603 12:24:51.342081 1086826 provision.go:177] copyRemoteCerts
	I0603 12:24:51.342147 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:24:51.342179 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.344885 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.345246 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.345285 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.345439 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.345638 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.345834 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.345944 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:24:51.424011 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:24:51.448763 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 12:24:51.472279 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:24:51.495752 1086826 provision.go:87] duration metric: took 501.043641ms to configureAuth
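configureAuth generates a server certificate whose SANs match the "generating server cert" line above (127.0.0.1, 192.168.39.241, addons-699562, localhost, minikube). Below is a sketch with Go's crypto/x509 of building such a certificate; it self-signs for brevity, whereas the real flow signs with the minikube CA key, so treat it as an illustration only.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-699562"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs reported in the provision log above.
			DNSNames:    []string{"addons-699562", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.241")},
		}
		// Self-signed for brevity; the real flow passes the minikube CA cert and
		// key as the parent and signing key instead of tmpl/key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}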
	I0603 12:24:51.495788 1086826 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:24:51.495998 1086826 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:24:51.496096 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.498510 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.498896 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.498926 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.499093 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.499296 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.499463 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.499633 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.499826 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:51.500031 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:51.500047 1086826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:24:51.754663 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:24:51.754694 1086826 main.go:141] libmachine: Checking connection to Docker...
	I0603 12:24:51.754702 1086826 main.go:141] libmachine: (addons-699562) Calling .GetURL
	I0603 12:24:51.756019 1086826 main.go:141] libmachine: (addons-699562) DBG | Using libvirt version 6000000
	I0603 12:24:51.758172 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.758540 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.758570 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.758742 1086826 main.go:141] libmachine: Docker is up and running!
	I0603 12:24:51.758758 1086826 main.go:141] libmachine: Reticulating splines...
	I0603 12:24:51.758766 1086826 client.go:171] duration metric: took 27.276637808s to LocalClient.Create
	I0603 12:24:51.758788 1086826 start.go:167] duration metric: took 27.276710156s to libmachine.API.Create "addons-699562"
	I0603 12:24:51.758798 1086826 start.go:293] postStartSetup for "addons-699562" (driver="kvm2")
	I0603 12:24:51.758807 1086826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:24:51.758824 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.759114 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:24:51.759147 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.761475 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.761749 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.761772 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.761911 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.762082 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.762241 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.762381 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:24:51.844343 1086826 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:24:51.848752 1086826 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:24:51.848851 1086826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:24:51.848922 1086826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:24:51.848944 1086826 start.go:296] duration metric: took 90.142044ms for postStartSetup
	I0603 12:24:51.848980 1086826 main.go:141] libmachine: (addons-699562) Calling .GetConfigRaw
	I0603 12:24:51.849638 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
	I0603 12:24:51.852138 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.852481 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.852518 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.852718 1086826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/config.json ...
	I0603 12:24:51.852881 1086826 start.go:128] duration metric: took 27.388970368s to createHost
	I0603 12:24:51.852902 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.854845 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.855095 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.855124 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.855228 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.855393 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.855524 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.855619 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.855730 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:51.855934 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:51.855948 1086826 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:24:51.954192 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717417491.933231742
	
	I0603 12:24:51.954226 1086826 fix.go:216] guest clock: 1717417491.933231742
	I0603 12:24:51.954242 1086826 fix.go:229] Guest: 2024-06-03 12:24:51.933231742 +0000 UTC Remote: 2024-06-03 12:24:51.852891604 +0000 UTC m=+27.492675075 (delta=80.340138ms)
	I0603 12:24:51.954295 1086826 fix.go:200] guest clock delta is within tolerance: 80.340138ms
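The guest clock check runs `date +%s.%N` on the VM, parses the seconds.nanoseconds string, and compares it to the host clock. Below is a small Go sketch of that parse-and-compare step; the 2s tolerance constant is an assumed placeholder, not minikube's configured threshold.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	// %N is zero-padded to nine digits, so the fraction parses as nanoseconds.
	func parseGuestClock(s string) (time.Time, error) {
		sec, nsec, _ := strings.Cut(strings.TrimSpace(s), ".")
		secs, err := strconv.ParseInt(sec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		ns := int64(0)
		if nsec != "" {
			if ns, err = strconv.ParseInt(nsec, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(secs, ns), nil
	}

	func main() {
		guest, err := parseGuestClock("1717417491.933231742") // value reported in the log
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold for illustration
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}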
	I0603 12:24:51.954303 1086826 start.go:83] releasing machines lock for "addons-699562", held for 27.490498582s
	I0603 12:24:51.954328 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.954622 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
	I0603 12:24:51.957183 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.957520 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.957549 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.957708 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.958214 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.958399 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.958512 1086826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:24:51.958570 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.958629 1086826 ssh_runner.go:195] Run: cat /version.json
	I0603 12:24:51.958653 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.961231 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.961545 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.961572 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.961595 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.961831 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.961927 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.961956 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.961990 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.962080 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.962171 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.962217 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.962347 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.962424 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:24:51.962504 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:24:52.034613 1086826 ssh_runner.go:195] Run: systemctl --version
	I0603 12:24:52.061172 1086826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:24:52.225052 1086826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:24:52.231581 1086826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:24:52.231658 1086826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:24:52.250882 1086826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
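The find/mv above sidelines pre-existing bridge and podman CNI configs by renaming them with a .mk_disabled suffix. Below is a local Go equivalent of that step, assuming direct filesystem access rather than the remote shell; disableConflictingCNI is an illustrative name.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflictingCNI renames bridge/podman CNI configs so only the CNI
	// minikube installs stays active; a local stand-in for the remote find+mv.
	func disableConflictingCNI(dir string) ([]string, error) {
		var disabled []string
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already sidelined
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		files, err := disableConflictingCNI("/etc/cni/net.d")
		fmt.Println(files, err)
	}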
	I0603 12:24:52.250910 1086826 start.go:494] detecting cgroup driver to use...
	I0603 12:24:52.250982 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:24:52.271994 1086826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:24:52.288519 1086826 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:24:52.288600 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:24:52.304066 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:24:52.318357 1086826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:24:52.444110 1086826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:24:52.607807 1086826 docker.go:233] disabling docker service ...
	I0603 12:24:52.607888 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:24:52.622983 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:24:52.635763 1086826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:24:52.756045 1086826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:24:52.870974 1086826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:24:52.885958 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:24:52.909939 1086826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:24:52.910019 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.920976 1086826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:24:52.921043 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.932075 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.943370 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.954401 1086826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:24:52.965823 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.976875 1086826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.994792 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:53.006108 1086826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:24:53.016055 1086826 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:24:53.016127 1086826 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:24:53.030064 1086826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
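The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and force the cgroupfs cgroup manager before CRI-O is restarted. Below is a Go sketch of the same two substitutions applied to the file contents in memory; the sample input is made up and the function name is illustrative.

	package main

	import (
		"fmt"
		"regexp"
	)

	// applyCrioOverrides mirrors the sed edits above: pin the pause image and
	// switch the cgroup manager to cgroupfs in the 02-crio.conf drop-in.
	func applyCrioOverrides(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		return conf
	}

	func main() {
		// Hypothetical drop-in contents before the overrides are applied.
		in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
		fmt.Print(applyCrioOverrides(in))
	}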
	I0603 12:24:53.039928 1086826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:24:53.156311 1086826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:24:53.297085 1086826 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:24:53.297199 1086826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:24:53.302466 1086826 start.go:562] Will wait 60s for crictl version
	I0603 12:24:53.302559 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:24:53.306379 1086826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:24:53.351831 1086826 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:24:53.351927 1086826 ssh_runner.go:195] Run: crio --version
	I0603 12:24:53.380029 1086826 ssh_runner.go:195] Run: crio --version
	I0603 12:24:53.410556 1086826 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:24:53.411804 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
	I0603 12:24:53.414687 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:53.415038 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:53.415065 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:53.415276 1086826 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:24:53.419753 1086826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:24:53.432681 1086826 kubeadm.go:877] updating cluster {Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0603 12:24:53.432799 1086826 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:24:53.432842 1086826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:24:53.466485 1086826 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:24:53.466571 1086826 ssh_runner.go:195] Run: which lz4
	I0603 12:24:53.470862 1086826 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:24:53.475112 1086826 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:24:53.475150 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:24:54.805803 1086826 crio.go:462] duration metric: took 1.334972428s to copy over tarball
	I0603 12:24:54.805891 1086826 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:24:57.079171 1086826 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273232417s)
	I0603 12:24:57.079222 1086826 crio.go:469] duration metric: took 2.273384926s to extract the tarball
	I0603 12:24:57.079239 1086826 ssh_runner.go:146] rm: /preloaded.tar.lz4
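The preload tarball was only copied because the earlier `stat` probe exited with status 1, which the runner interprets as "file not present" rather than a failure. Below is a local Go sketch of that exit-status interpretation using os/exec; it runs stat on the local host for simplicity, whereas the log runs it over SSH.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// fileExists mimics the existence probe above: run stat and treat a
	// non-zero exit as "missing" rather than a hard error.
	func fileExists(path string) (bool, error) {
		cmd := exec.Command("stat", "-c", "%s %y", path)
		if err := cmd.Run(); err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				return false, nil // exited non-zero: file not there
			}
			return false, err // stat missing, I/O error, etc.
		}
		return true, nil
	}

	func main() {
		ok, err := fileExists("/preloaded.tar.lz4")
		fmt.Println(ok, err)
	}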
	I0603 12:24:57.118848 1086826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:24:57.171954 1086826 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:24:57.171984 1086826 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:24:57.171995 1086826 kubeadm.go:928] updating node { 192.168.39.241 8443 v1.30.1 crio true true} ...
	I0603 12:24:57.172114 1086826 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-699562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:24:57.172180 1086826 ssh_runner.go:195] Run: crio config
	I0603 12:24:57.226884 1086826 cni.go:84] Creating CNI manager for ""
	I0603 12:24:57.226908 1086826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:24:57.226918 1086826 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:24:57.226941 1086826 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-699562 NodeName:addons-699562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:24:57.227076 1086826 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-699562"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 12:24:57.227137 1086826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:24:57.239219 1086826 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:24:57.239289 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:24:57.250643 1086826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 12:24:57.269452 1086826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:24:57.289045 1086826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
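The kubeadm.yaml.new written above holds several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. Below is a sketch of reading such a multi-document file with a decoder loop; gopkg.in/yaml.v3 is an assumed choice for illustration, not necessarily the library minikube uses.

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Decode documents one by one until io.EOF.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if err == io.EOF {
					break
				}
				panic(err)
			}
			fmt.Println("kind:", doc["kind"]) // e.g. InitConfiguration, ClusterConfiguration, ...
		}
	}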
	I0603 12:24:57.308060 1086826 ssh_runner.go:195] Run: grep 192.168.39.241	control-plane.minikube.internal$ /etc/hosts
	I0603 12:24:57.312353 1086826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:24:57.326672 1086826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:24:57.469263 1086826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:24:57.487461 1086826 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562 for IP: 192.168.39.241
	I0603 12:24:57.487489 1086826 certs.go:194] generating shared ca certs ...
	I0603 12:24:57.487508 1086826 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.487662 1086826 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:24:57.796136 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt ...
	I0603 12:24:57.796173 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt: {Name:mkf6899bfed4ad6512f084e6101d8170b87aa8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.796347 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key ...
	I0603 12:24:57.796359 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key: {Name:mkb9d4ed66614d50db2e65010103ad18fc38392f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.796434 1086826 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:24:57.988064 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt ...
	I0603 12:24:57.988093 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt: {Name:mkab0d8277f7066917c19f74ecac4b98f17efe97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.988258 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key ...
	I0603 12:24:57.988269 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key: {Name:mkfdedf65267e5b22a2568e9daa9efca1f06a694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.988340 1086826 certs.go:256] generating profile certs ...
	I0603 12:24:57.988401 1086826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.key
	I0603 12:24:57.988418 1086826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt with IP's: []
	I0603 12:24:58.169717 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt ...
	I0603 12:24:58.169748 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: {Name:mk0332016de9f15436fb308f06459566b4755678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.169912 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.key ...
	I0603 12:24:58.169924 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.key: {Name:mkbd821f9271c2b7a33d746cd213fabc96fbeca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.169995 1086826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0
	I0603 12:24:58.170014 1086826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.241]
	I0603 12:24:58.353654 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0 ...
	I0603 12:24:58.353688 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0: {Name:mk2efdc33db4a931854f6a87476a9e7c076c4560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.353848 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0 ...
	I0603 12:24:58.353862 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0: {Name:mkbfd6a19ac77e29694cc3e059a9a211b4a91c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.353944 1086826 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt
	I0603 12:24:58.354032 1086826 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key
	I0603 12:24:58.354086 1086826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key
	I0603 12:24:58.354106 1086826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt with IP's: []
	I0603 12:24:58.527806 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt ...
	I0603 12:24:58.527842 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt: {Name:mkaadea5326f9442ed664027a21a81b1f09a2cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.528017 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key ...
	I0603 12:24:58.528030 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key: {Name:mk8d4a5cdfed9257e413dc25422f47f0d4704dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.528204 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:24:58.528243 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:24:58.528269 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:24:58.528291 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 12:24:58.528908 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:24:58.555501 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:24:58.579639 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:24:58.603222 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:24:58.626441 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0603 12:24:58.650027 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 12:24:58.673722 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:24:58.696976 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:24:58.720137 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:24:58.743306 1086826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:24:58.760135 1086826 ssh_runner.go:195] Run: openssl version
	I0603 12:24:58.766118 1086826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:24:58.777446 1086826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:24:58.781938 1086826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:24:58.781984 1086826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:24:58.787982 1086826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:24:58.799110 1086826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:24:58.803751 1086826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 12:24:58.803822 1086826 kubeadm.go:391] StartCluster: {Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:24:58.803923 1086826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:24:58.803994 1086826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:24:58.845118 1086826 cri.go:89] found id: ""
	I0603 12:24:58.845210 1086826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 12:24:58.855452 1086826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:24:58.866068 1086826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:24:58.876236 1086826 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:24:58.876269 1086826 kubeadm.go:156] found existing configuration files:
	
	I0603 12:24:58.876322 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:24:58.885660 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:24:58.885722 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:24:58.895484 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:24:58.904797 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:24:58.904849 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:24:58.914443 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:24:58.923913 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:24:58.923979 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:24:58.936380 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:24:58.946194 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:24:58.946246 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
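Condensed, the stale-config check and cleanup recorded in the lines above amounts to the following shell loop (a sketch of the commands shown in this log, not minikube's actual source):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep a kubeconfig only if it already points at the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done

Since none of the four files exist on a freshly provisioned node, each grep exits with status 2 and the follow-up rm -f is a no-op, which is exactly what the log records.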
	I0603 12:24:58.968281 1086826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:24:59.030885 1086826 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:24:59.030942 1086826 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:24:59.154648 1086826 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:24:59.154824 1086826 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:24:59.154989 1086826 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:24:59.386626 1086826 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:24:59.388555 1086826 out.go:204]   - Generating certificates and keys ...
	I0603 12:24:59.388648 1086826 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:24:59.388729 1086826 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:24:59.509601 1086826 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 12:24:59.635592 1086826 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 12:24:59.705913 1086826 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 12:24:59.780001 1086826 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 12:24:59.863390 1086826 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 12:24:59.863715 1086826 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-699562 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0603 12:24:59.965490 1086826 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 12:24:59.965718 1086826 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-699562 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0603 12:25:00.170107 1086826 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 12:25:00.327566 1086826 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 12:25:00.439543 1086826 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 12:25:00.439669 1086826 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:25:00.535598 1086826 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:25:00.754190 1086826 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:25:00.905712 1086826 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:25:01.465978 1086826 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:25:01.632676 1086826 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:25:01.633547 1086826 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:25:01.637277 1086826 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:25:01.639011 1086826 out.go:204]   - Booting up control plane ...
	I0603 12:25:01.639143 1086826 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:25:01.639247 1086826 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:25:01.639361 1086826 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:25:01.655395 1086826 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:25:01.656324 1086826 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:25:01.656395 1086826 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:25:01.797299 1086826 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:25:01.797451 1086826 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:25:02.797820 1086826 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00116594s
	I0603 12:25:02.797972 1086826 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:25:07.796996 1086826 kubeadm.go:309] [api-check] The API server is healthy after 5.001434435s
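The api-check that kubeadm reports healthy here can be approximated by hand against the same cluster (illustrative only; this is not necessarily the exact probe kubeadm runs):

    # ask the apiserver for its health status using the kubeconfig minikube manages on the node
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz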
	I0603 12:25:07.809118 1086826 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:25:07.824366 1086826 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:25:07.858549 1086826 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:25:07.858769 1086826 kubeadm.go:309] [mark-control-plane] Marking the node addons-699562 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:25:07.872341 1086826 kubeadm.go:309] [bootstrap-token] Using token: 949ojx.jojr63h99myrhn1a
	I0603 12:25:07.873773 1086826 out.go:204]   - Configuring RBAC rules ...
	I0603 12:25:07.873890 1086826 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:25:07.890269 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:25:07.901714 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:25:07.905951 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:25:07.910910 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:25:07.915270 1086826 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:25:08.203573 1086826 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:25:08.642159 1086826 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:25:09.203978 1086826 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:25:09.206070 1086826 kubeadm.go:309] 
	I0603 12:25:09.206152 1086826 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:25:09.206165 1086826 kubeadm.go:309] 
	I0603 12:25:09.206239 1086826 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:25:09.206250 1086826 kubeadm.go:309] 
	I0603 12:25:09.206294 1086826 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:25:09.206383 1086826 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:25:09.206468 1086826 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:25:09.206510 1086826 kubeadm.go:309] 
	I0603 12:25:09.206601 1086826 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:25:09.206613 1086826 kubeadm.go:309] 
	I0603 12:25:09.206679 1086826 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:25:09.206690 1086826 kubeadm.go:309] 
	I0603 12:25:09.206752 1086826 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:25:09.206864 1086826 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:25:09.206964 1086826 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:25:09.207036 1086826 kubeadm.go:309] 
	I0603 12:25:09.207149 1086826 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:25:09.207264 1086826 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:25:09.207280 1086826 kubeadm.go:309] 
	I0603 12:25:09.207387 1086826 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 949ojx.jojr63h99myrhn1a \
	I0603 12:25:09.207541 1086826 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 12:25:09.207575 1086826 kubeadm.go:309] 	--control-plane 
	I0603 12:25:09.207586 1086826 kubeadm.go:309] 
	I0603 12:25:09.207685 1086826 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:25:09.207694 1086826 kubeadm.go:309] 
	I0603 12:25:09.207813 1086826 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 949ojx.jojr63h99myrhn1a \
	I0603 12:25:09.207922 1086826 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
	I0603 12:25:09.209051 1086826 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:25:09.209091 1086826 cni.go:84] Creating CNI manager for ""
	I0603 12:25:09.209105 1086826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:25:09.211191 1086826 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:25:09.212411 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:25:09.223015 1086826 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
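The 496-byte conflist itself is not echoed into the log; a bridge CNI configuration of roughly this shape is what gets written for the kvm2 + crio combination (an illustrative sketch only — the plugin list and pod subnet below are assumptions, not values captured from this run):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF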
	I0603 12:25:09.241025 1086826 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:25:09.241111 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-699562 minikube.k8s.io/updated_at=2024_06_03T12_25_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=addons-699562 minikube.k8s.io/primary=true
	I0603 12:25:09.241113 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:09.273153 1086826 ops.go:34] apiserver oom_adj: -16
	I0603 12:25:09.379202 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:09.879303 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:10.379958 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:10.880244 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:11.380084 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:11.879530 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:12.380197 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:12.879382 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:13.379614 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:13.879715 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:14.379835 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:14.879796 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:15.380103 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:15.879378 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:16.379576 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:16.879485 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:17.379812 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:17.879440 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:18.379509 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:18.879789 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:19.379568 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:19.880105 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:20.380021 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:20.880130 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:21.380225 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:21.879752 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:22.380331 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:22.880058 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:23.038068 1086826 kubeadm.go:1107] duration metric: took 13.797019699s to wait for elevateKubeSystemPrivileges
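The burst of "kubectl get sa default" calls above (issued every ~500 ms) is the wait the summary line times as elevateKubeSystemPrivileges: minikube polls until kube-controller-manager has created the default ServiceAccount. In shell terms the wait is roughly equivalent to:

    # poll until the default ServiceAccount exists in the default namespace
    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl \
          --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done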
	W0603 12:25:23.038132 1086826 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:25:23.038145 1086826 kubeadm.go:393] duration metric: took 24.234331356s to StartCluster
	I0603 12:25:23.038180 1086826 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:23.038355 1086826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:25:23.038990 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:23.039268 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 12:25:23.039288 1086826 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:25:23.041364 1086826 out.go:177] * Verifying Kubernetes components...
	I0603 12:25:23.039372 1086826 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
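Each entry in this toEnable map can also be toggled per profile from the CLI after start-up, for example:

    # enable or disable a single addon on the addons-699562 profile
    out/minikube-linux-amd64 -p addons-699562 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-699562 addons disable helm-tiller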
	I0603 12:25:23.039478 1086826 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:25:23.042813 1086826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:25:23.042835 1086826 addons.go:69] Setting yakd=true in profile "addons-699562"
	I0603 12:25:23.042844 1086826 addons.go:69] Setting cloud-spanner=true in profile "addons-699562"
	I0603 12:25:23.042871 1086826 addons.go:234] Setting addon cloud-spanner=true in "addons-699562"
	I0603 12:25:23.042879 1086826 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-699562"
	I0603 12:25:23.042882 1086826 addons.go:69] Setting metrics-server=true in profile "addons-699562"
	I0603 12:25:23.042895 1086826 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-699562"
	I0603 12:25:23.042929 1086826 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-699562"
	I0603 12:25:23.042943 1086826 addons.go:69] Setting volcano=true in profile "addons-699562"
	I0603 12:25:23.042955 1086826 addons.go:69] Setting storage-provisioner=true in profile "addons-699562"
	I0603 12:25:23.042965 1086826 addons.go:69] Setting registry=true in profile "addons-699562"
	I0603 12:25:23.042969 1086826 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-699562"
	I0603 12:25:23.042983 1086826 addons.go:234] Setting addon registry=true in "addons-699562"
	I0603 12:25:23.042971 1086826 addons.go:234] Setting addon volcano=true in "addons-699562"
	I0603 12:25:23.043048 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.043107 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042871 1086826 addons.go:234] Setting addon yakd=true in "addons-699562"
	I0603 12:25:23.043183 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042936 1086826 addons.go:69] Setting gcp-auth=true in profile "addons-699562"
	I0603 12:25:23.043240 1086826 mustload.go:65] Loading cluster: addons-699562
	I0603 12:25:23.043428 1086826 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:25:23.043518 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.043550 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.043568 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.043579 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.043598 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.043609 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.043628 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.043668 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042915 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042907 1086826 addons.go:234] Setting addon metrics-server=true in "addons-699562"
	I0603 12:25:23.044377 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042917 1086826 addons.go:69] Setting default-storageclass=true in profile "addons-699562"
	I0603 12:25:23.044579 1086826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-699562"
	I0603 12:25:23.045076 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.045155 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.045255 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.045307 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042925 1086826 addons.go:69] Setting ingress=true in profile "addons-699562"
	I0603 12:25:23.045756 1086826 addons.go:234] Setting addon ingress=true in "addons-699562"
	I0603 12:25:23.045822 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.046281 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.046315 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042989 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042936 1086826 addons.go:69] Setting inspektor-gadget=true in profile "addons-699562"
	I0603 12:25:23.047032 1086826 addons.go:234] Setting addon inspektor-gadget=true in "addons-699562"
	I0603 12:25:23.047071 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042943 1086826 addons.go:69] Setting helm-tiller=true in profile "addons-699562"
	I0603 12:25:23.047185 1086826 addons.go:234] Setting addon helm-tiller=true in "addons-699562"
	I0603 12:25:23.047212 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.047273 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.047353 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.047374 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042988 1086826 addons.go:234] Setting addon storage-provisioner=true in "addons-699562"
	I0603 12:25:23.047674 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042972 1086826 addons.go:69] Setting volumesnapshots=true in profile "addons-699562"
	I0603 12:25:23.047777 1086826 addons.go:234] Setting addon volumesnapshots=true in "addons-699562"
	I0603 12:25:23.047830 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.047846 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.047862 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.047866 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.047882 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.044547 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.042832 1086826 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-699562"
	I0603 12:25:23.048354 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042928 1086826 addons.go:69] Setting ingress-dns=true in profile "addons-699562"
	I0603 12:25:23.048397 1086826 addons.go:234] Setting addon ingress-dns=true in "addons-699562"
	I0603 12:25:23.048437 1086826 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-699562"
	I0603 12:25:23.048446 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.048477 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.047570 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.049263 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.049297 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.053615 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.054040 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.069220 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
	I0603 12:25:23.073731 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I0603 12:25:23.073786 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
	I0603 12:25:23.073889 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0603 12:25:23.074001 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
	I0603 12:25:23.074277 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.074328 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.074963 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.075006 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.075941 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.076110 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.076230 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.076319 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.076403 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.077634 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.077660 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.077833 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.077853 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.077978 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.077989 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.078025 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.078054 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.078121 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.078545 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.078588 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.079218 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.079264 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.089204 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
	I0603 12:25:23.089281 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.089382 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.089488 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.089515 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.090485 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.090522 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.091428 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.091439 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.091998 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.092019 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.092018 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.092061 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.092561 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.092601 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.104451 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0603 12:25:23.104719 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46329
	I0603 12:25:23.105270 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.105974 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37029
	I0603 12:25:23.106145 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.106224 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42751
	I0603 12:25:23.106264 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.106336 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0603 12:25:23.106543 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.106556 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.106884 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.106914 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.107363 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.107374 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.107456 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.108069 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.108094 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.108126 1086826 addons.go:234] Setting addon default-storageclass=true in "addons-699562"
	I0603 12:25:23.108164 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.108242 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.108255 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.108522 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.108553 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.108768 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.108838 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0603 12:25:23.109309 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.109376 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.110138 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.110159 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.110602 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.110637 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.111158 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.111762 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.111797 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.113967 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.114019 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.114729 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.114755 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.115240 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.115820 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.115863 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.116533 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.116565 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.116932 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.117491 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.117527 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.121971 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.122294 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.125513 1086826 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-699562"
	I0603 12:25:23.125565 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.125951 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.126003 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.127933 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
	I0603 12:25:23.128409 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.129023 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.129046 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.133336 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I0603 12:25:23.133978 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.134032 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.134207 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.134828 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.134855 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.135234 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.135477 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.137625 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.139847 1086826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 12:25:23.138376 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.140132 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46499
	I0603 12:25:23.142584 1086826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0603 12:25:23.140984 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I0603 12:25:23.141908 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.143938 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I0603 12:25:23.144587 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0603 12:25:23.144635 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.145368 1086826 out.go:177]   - Using image docker.io/registry:2.8.3
	I0603 12:25:23.145978 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.146582 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.146643 1086826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 12:25:23.148445 1086826 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 12:25:23.148467 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0603 12:25:23.148489 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.146662 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0603 12:25:23.146051 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.147210 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.147313 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.147468 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.149199 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.150420 1086826 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0603 12:25:23.150511 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.151060 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.152055 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.151064 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.152103 1086826 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0603 12:25:23.152121 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0603 12:25:23.152142 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.152172 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.152107 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.151702 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.151488 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.152265 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.152469 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.152541 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.152630 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.153074 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.153116 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.153228 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.153262 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.153264 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.153299 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.153701 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.153771 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.153798 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.153986 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.154082 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.154329 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.154563 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.154740 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.155872 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.156261 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.156302 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.156862 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.157120 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.158786 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0603 12:25:23.157660 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.157984 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.160125 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0603 12:25:23.160139 1086826 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0603 12:25:23.160159 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.160204 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.160399 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.160579 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.160730 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.161513 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0603 12:25:23.162147 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.162765 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.162792 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.163193 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.163488 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.163614 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0603 12:25:23.164160 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.164193 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.164587 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.164614 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.164750 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.164763 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.164805 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.164974 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.165184 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.165234 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.165357 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.165664 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.166081 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.166343 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:23.166357 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:23.166891 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:23.166905 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:23.166914 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:23.166921 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:23.167166 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:23.167199 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:23.167207 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	W0603 12:25:23.167307 1086826 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0603 12:25:23.167541 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.169814 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0603 12:25:23.169073 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0603 12:25:23.171125 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0603 12:25:23.172468 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0603 12:25:23.171517 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.175038 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0603 12:25:23.174482 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.176419 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0603 12:25:23.176434 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.177888 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0603 12:25:23.180053 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0603 12:25:23.178457 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.180020 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0603 12:25:23.183356 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0603 12:25:23.183365 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0603 12:25:23.181980 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.182398 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.182436 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38145
	I0603 12:25:23.182860 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46285
	I0603 12:25:23.183773 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.185766 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0603 12:25:23.185787 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0603 12:25:23.185818 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.184597 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
	I0603 12:25:23.185463 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0603 12:25:23.186847 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.186864 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.186875 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.186883 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.187209 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.187318 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.187375 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.187761 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.187866 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.188143 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.188161 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.188291 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.188305 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.188319 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.188849 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.190945 1086826 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0603 12:25:23.189483 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.189515 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.189541 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.189703 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0603 12:25:23.190094 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.190186 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I0603 12:25:23.190263 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.190561 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.191601 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.192351 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.193396 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0603 12:25:23.193467 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0603 12:25:23.193473 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.193494 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.193500 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.193539 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.194368 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.194396 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.194450 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.196235 1086826 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:25:23.194512 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.194692 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.195573 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.195618 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.195621 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.195676 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0603 12:25:23.195711 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.195757 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.196201 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.197205 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.197890 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.197916 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.197972 1086826 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:25:23.197983 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:25:23.198001 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.198024 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.198035 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.197833 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.198131 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.198185 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.198474 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.198547 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.200480 1086826 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0603 12:25:23.199048 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.199079 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.199482 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.199522 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.199896 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.201123 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.201303 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.201308 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.201884 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.202138 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0603 12:25:23.202157 1086826 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0603 12:25:23.202175 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.202289 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.202978 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.203030 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.204877 1086826 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0603 12:25:23.203059 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.203097 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.203124 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.203254 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.204827 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.205130 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0603 12:25:23.205471 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.206028 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.206324 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.206371 1086826 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 12:25:23.206656 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.206651 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.206709 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.207699 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.207618 1086826 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0603 12:25:23.207788 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.207805 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0603 12:25:23.209168 1086826 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 12:25:23.209191 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0603 12:25:23.209215 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.209173 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.208391 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.208423 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.208443 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.210668 1086826 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0603 12:25:23.208815 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.209149 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.207681 1086826 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0603 12:25:23.209933 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.209956 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.209992 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.212401 1086826 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0603 12:25:23.212427 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0603 12:25:23.212450 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.212513 1086826 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0603 12:25:23.214086 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:25:23.214106 1086826 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:25:23.214136 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.212668 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.214207 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.214231 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.215694 1086826 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0603 12:25:23.213493 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.215710 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0603 12:25:23.215726 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.213529 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.213766 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.216403 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.213791 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.215381 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.216476 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.216335 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.216495 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.216348 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.216526 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.216541 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.216699 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.216705 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.216755 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.216872 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.216928 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.216996 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.217163 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.217220 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.217347 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.217506 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.218087 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.218182 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.218381 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.218727 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.218766 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.218941 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.219129 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.221916 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.221953 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.221958 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.221979 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.221998 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.222143 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.222152 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.222332 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.222454 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.225717 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I0603 12:25:23.254144 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.254702 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.254723 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	W0603 12:25:23.255054 1086826 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43442->192.168.39.241:22: read: connection reset by peer
	I0603 12:25:23.255084 1086826 retry.go:31] will retry after 338.902816ms: ssh: handshake failed: read tcp 192.168.39.1:43442->192.168.39.241:22: read: connection reset by peer
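
The handshake failure just above is not fatal: sshutil reports it and retries after a jittered delay. A minimal Go sketch of that retry-with-backoff pattern, using hypothetical helper names rather than minikube's actual retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff is an illustrative sketch of the pattern behind the
	// "will retry after 338.902816ms" message above, not minikube's retry.go.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// add jitter so concurrent dialers do not retry in lockstep
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(3, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: connection reset by peer")
			}
			return nil
		})
		fmt.Println("final error:", err)
	}
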
	I0603 12:25:23.255125 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.255337 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.256992 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.257254 1086826 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:25:23.257274 1086826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:25:23.257293 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.260592 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.261091 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.261117 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.261326 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.261563 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.261725 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.261873 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.271376 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I0603 12:25:23.271765 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.272295 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.272319 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.272678 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.272874 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.274522 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.276422 1086826 out.go:177]   - Using image docker.io/busybox:stable
	I0603 12:25:23.278088 1086826 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0603 12:25:23.279525 1086826 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 12:25:23.279549 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0603 12:25:23.279573 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.282585 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.283142 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.283175 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.283311 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.283518 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.283758 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.283941 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.664725 1086826 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0603 12:25:23.664753 1086826 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0603 12:25:23.703703 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 12:25:23.707713 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0603 12:25:23.707747 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0603 12:25:23.751402 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0603 12:25:23.751436 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0603 12:25:23.753543 1086826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:25:23.755796 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
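
The /bin/bash pipeline above patches the coredns ConfigMap in place: the first sed expression inserts a hosts block ahead of the forward directive so that host.minikube.internal resolves to the host-side bridge IP, and the second inserts a log directive ahead of errors to enable CoreDNS query logging for the test run. Reconstructed from those sed expressions, the stanza added to the Corefile is roughly:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
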
	I0603 12:25:23.759663 1086826 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0603 12:25:23.759693 1086826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0603 12:25:23.770791 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:25:23.831389 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 12:25:23.841828 1086826 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0603 12:25:23.841855 1086826 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0603 12:25:23.852519 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:25:23.852587 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0603 12:25:23.862595 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 12:25:23.878492 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 12:25:23.880249 1086826 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0603 12:25:23.880280 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0603 12:25:23.888423 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0603 12:25:23.888449 1086826 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0603 12:25:23.929569 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:25:23.954649 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0603 12:25:23.954686 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0603 12:25:24.003881 1086826 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0603 12:25:24.003919 1086826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0603 12:25:24.012187 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:25:24.012219 1086826 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:25:24.069615 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0603 12:25:24.069647 1086826 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0603 12:25:24.071510 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0603 12:25:24.071533 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0603 12:25:24.090048 1086826 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 12:25:24.090085 1086826 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0603 12:25:24.094186 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0603 12:25:24.142839 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:25:24.142888 1086826 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:25:24.188372 1086826 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0603 12:25:24.188411 1086826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0603 12:25:24.191010 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0603 12:25:24.191034 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0603 12:25:24.196505 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0603 12:25:24.242688 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 12:25:24.295016 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0603 12:25:24.295050 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0603 12:25:24.313366 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0603 12:25:24.313401 1086826 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0603 12:25:24.327127 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0603 12:25:24.327158 1086826 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0603 12:25:24.368069 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0603 12:25:24.368097 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0603 12:25:24.430388 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:25:24.468724 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0603 12:25:24.468763 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0603 12:25:24.491688 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0603 12:25:24.491709 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0603 12:25:24.507202 1086826 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 12:25:24.507226 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0603 12:25:24.573395 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0603 12:25:24.573434 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0603 12:25:24.673069 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 12:25:24.710678 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0603 12:25:24.740006 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0603 12:25:24.740041 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0603 12:25:24.774202 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0603 12:25:24.774240 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0603 12:25:25.120454 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0603 12:25:25.120483 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0603 12:25:25.139484 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 12:25:25.139513 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0603 12:25:25.367527 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0603 12:25:25.367559 1086826 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0603 12:25:25.399703 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 12:25:25.664579 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0603 12:25:25.664608 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0603 12:25:25.965006 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0603 12:25:25.965039 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0603 12:25:26.478180 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 12:25:26.478220 1086826 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0603 12:25:26.694814 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 12:25:30.234960 1086826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0603 12:25:30.235007 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:30.238190 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:30.238600 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:30.238635 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:30.238823 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:30.239054 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:30.239222 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:30.239392 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:30.740050 1086826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0603 12:25:30.851470 1086826 addons.go:234] Setting addon gcp-auth=true in "addons-699562"
	I0603 12:25:30.851557 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:30.852058 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:30.852094 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:30.868185 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0603 12:25:30.868773 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:30.869431 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:30.869461 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:30.869815 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:30.870344 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:30.870377 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:30.886448 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0603 12:25:30.886966 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:30.887481 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:30.887505 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:30.887824 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:30.888034 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:30.889619 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:30.889859 1086826 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0603 12:25:30.889888 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:30.892565 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:30.893052 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:30.893120 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:30.893241 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:30.893420 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:30.893579 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:30.893836 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:31.588516 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.884760346s)
	I0603 12:25:31.588543 1086826 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.834961071s)
	I0603 12:25:31.588585 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588588 1086826 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.832761998s)
	I0603 12:25:31.588599 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588607 1086826 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0603 12:25:31.588647 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.817821512s)
	I0603 12:25:31.588690 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588707 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588761 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.72613988s)
	I0603 12:25:31.588790 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588798 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588816 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.710295183s)
	I0603 12:25:31.588707 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.757290038s)
	I0603 12:25:31.588849 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588855 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588835 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588888 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.659283987s)
	I0603 12:25:31.588892 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588903 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588912 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588951 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.494737371s)
	I0603 12:25:31.588970 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588977 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588991 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.392463973s)
	I0603 12:25:31.589009 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589017 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589046 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.346321083s)
	I0603 12:25:31.589063 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589073 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589124 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.158682793s)
	I0603 12:25:31.589141 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589151 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589293 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.916193022s)
	W0603 12:25:31.589322 1086826 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0603 12:25:31.589348 1086826 retry.go:31] will retry after 269.565045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
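
The failure above is an ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRDs that define its kind, so the API server has no mapping for snapshot.storage.k8s.io/v1 yet, and the retry only succeeds once the CRDs are registered. A common way to sidestep that race is to apply the CRDs first and wait for them to be established before applying objects of the new kind; a minimal Go sketch shelling out to kubectl (illustrative only, reusing the file paths from this log rather than minikube's actual addon flow):

	package main

	import (
		"log"
		"os/exec"
	)

	// run shells out to kubectl and aborts on the first error (illustrative helper).
	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
		}
	}

	func main() {
		// Apply the VolumeSnapshot CRDs first ...
		run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
		// ... wait until the CRD is established ...
		run("wait", "--for", "condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
		// ... and only then apply objects of kind VolumeSnapshotClass.
		run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	}
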
	I0603 12:25:31.589443 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.878723157s)
	I0603 12:25:31.589471 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589483 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589514 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589540 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589573 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589583 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589592 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589600 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.18985866s)
	I0603 12:25:31.589630 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589633 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589651 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589609 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589720 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589732 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589736 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589739 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589746 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589767 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589777 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589785 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589791 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589792 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589802 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589812 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589815 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589819 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589822 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589830 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589836 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589792 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589877 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589899 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589906 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589915 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589922 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589987 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590005 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590022 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590027 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590181 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590194 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590201 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.590207 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.590256 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590277 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590287 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590294 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.590303 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.590339 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590361 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590374 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590381 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.590388 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.590432 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590455 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590462 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590469 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.590475 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.590535 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590556 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590563 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590572 1086826 addons.go:475] Verifying addon metrics-server=true in "addons-699562"
	I0603 12:25:31.592647 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.592688 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.592699 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.593151 1086826 node_ready.go:35] waiting up to 6m0s for node "addons-699562" to be "Ready" ...
	I0603 12:25:31.593360 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.593391 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.593401 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.593427 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.593437 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.593482 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.593507 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.593513 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.595143 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595175 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595183 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.595271 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595289 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595296 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.595303 1086826 addons.go:475] Verifying addon ingress=true in "addons-699562"
	I0603 12:25:31.598115 1086826 out.go:177] * Verifying ingress addon...
	I0603 12:25:31.595658 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595682 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595698 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595717 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595736 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595755 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595770 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595787 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.596138 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.596162 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.596337 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.596353 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.599578 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599608 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599609 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599614 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599672 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599618 1086826 addons.go:475] Verifying addon registry=true in "addons-699562"
	I0603 12:25:31.599705 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.601327 1086826 out.go:177] * Verifying registry addon...
	I0603 12:25:31.599623 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.600355 1086826 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0603 12:25:31.603078 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.603348 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.603401 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.603419 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.604909 1086826 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-699562 service yakd-dashboard -n yakd-dashboard
	
	I0603 12:25:31.603845 1086826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0603 12:25:31.618188 1086826 node_ready.go:49] node "addons-699562" has status "Ready":"True"
	I0603 12:25:31.618210 1086826 node_ready.go:38] duration metric: took 25.032701ms for node "addons-699562" to be "Ready" ...
	I0603 12:25:31.618219 1086826 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:25:31.619958 1086826 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0603 12:25:31.619977 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:31.629749 1086826 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0603 12:25:31.629771 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
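	The kapi.go lines above poll each label selector until every matching pod reports Ready. For readers reproducing that check outside the harness, below is a minimal client-go sketch of an equivalent readiness loop; the namespace and selector are taken from the log, while the kubeconfig location, 3-second poll interval, and 6-minute timeout are illustrative assumptions rather than the harness's actual values (minikube's own loop lives in kapi.go).

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: default kubeconfig in $HOME/.kube/config; the test harness
	// reaches the cluster through its own machinery.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Namespace and selector come from the log above; interval and timeout
	// are illustrative assumptions.
	namespace, selector := "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet, keep polling
			}
			for _, p := range pods.Items {
				if !podReady(&p) {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("all %q pods are Ready\n", selector)
}
```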
	I0603 12:25:31.643074 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.643101 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.643239 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.643263 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.643396 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.643421 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.643530 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.643580 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.643589 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	W0603 12:25:31.643687 1086826 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
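	The warning above means the storage-provisioner-rancher callback lost an optimistic-concurrency race while marking the local-path StorageClass as the default: its Update carried a stale resourceVersion, so the API server rejected it. Below is a minimal client-go sketch of the usual remedy, re-reading the object and retrying on conflict via retry.RetryOnConflict; the kubeconfig handling is an assumption, and the storageclass.kubernetes.io/is-default-class annotation is the standard Kubernetes marker for a default StorageClass. Minikube's actual callback may work differently; a JSON merge patch touching only the annotation would sidestep the resourceVersion race altogether.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// markDefaultStorageClass re-reads the StorageClass on every attempt so the
// update carries a fresh resourceVersion; retry.RetryOnConflict absorbs the
// "the object has been modified" conflicts seen in the log above.
func markDefaultStorageClass(client kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := client.StorageV1().StorageClasses().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = client.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	// Assumption: default kubeconfig in $HOME/.kube/config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := markDefaultStorageClass(client, "local-path"); err != nil {
		panic(err)
	}
}
```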
	I0603 12:25:31.661700 1086826 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hmhdl" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.709539 1086826 pod_ready.go:92] pod "coredns-7db6d8ff4d-hmhdl" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.709561 1086826 pod_ready.go:81] duration metric: took 47.835085ms for pod "coredns-7db6d8ff4d-hmhdl" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.709572 1086826 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qjklp" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.749241 1086826 pod_ready.go:92] pod "coredns-7db6d8ff4d-qjklp" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.749267 1086826 pod_ready.go:81] duration metric: took 39.689686ms for pod "coredns-7db6d8ff4d-qjklp" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.749278 1086826 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.782613 1086826 pod_ready.go:92] pod "etcd-addons-699562" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.782641 1086826 pod_ready.go:81] duration metric: took 33.356992ms for pod "etcd-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.782651 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.827401 1086826 pod_ready.go:92] pod "kube-apiserver-addons-699562" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.827426 1086826 pod_ready.go:81] duration metric: took 44.767786ms for pod "kube-apiserver-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.827438 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.860140 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 12:25:31.996499 1086826 pod_ready.go:92] pod "kube-controller-manager-addons-699562" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.996535 1086826 pod_ready.go:81] duration metric: took 169.090158ms for pod "kube-controller-manager-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.996551 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6ssr8" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:32.092808 1086826 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-699562" context rescaled to 1 replicas
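	The "rescaled to 1 replicas" line above records minikube trimming the coredns Deployment to a single replica. Below is a minimal sketch of the same scale change through client-go's scale subresource; the kubeconfig handling is an assumption, while the namespace and deployment name ("kube-system"/"coredns") are taken from the log line.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default kubeconfig in $HOME/.kube/config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// Read the current scale subresource, then write it back with one replica,
	// mirroring the "coredns ... rescaled to 1 replicas" step in the log.
	scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}
```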
	I0603 12:25:32.107781 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:32.113006 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:32.397400 1086826 pod_ready.go:92] pod "kube-proxy-6ssr8" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:32.397456 1086826 pod_ready.go:81] duration metric: took 400.897369ms for pod "kube-proxy-6ssr8" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:32.397471 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:32.609145 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:32.625118 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:32.804548 1086826 pod_ready.go:92] pod "kube-scheduler-addons-699562" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:32.804582 1086826 pod_ready.go:81] duration metric: took 407.101572ms for pod "kube-scheduler-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:32.804597 1086826 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:33.108552 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:33.128901 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:33.534656 1086826 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.644766775s)
	I0603 12:25:33.536193 1086826 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0603 12:25:33.534859 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.839987769s)
	I0603 12:25:33.536260 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:33.536276 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:33.539162 1086826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 12:25:33.538004 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:33.538036 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:33.540446 1086826 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0603 12:25:33.540459 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:33.540472 1086826 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0603 12:25:33.540478 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:33.540491 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:33.540734 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:33.540752 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:33.540765 1086826 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-699562"
	I0603 12:25:33.542102 1086826 out.go:177] * Verifying csi-hostpath-driver addon...
	I0603 12:25:33.544077 1086826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0603 12:25:33.568601 1086826 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0603 12:25:33.568623 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:33.641891 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:33.645693 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:33.791417 1086826 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0603 12:25:33.791441 1086826 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0603 12:25:33.842537 1086826 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 12:25:33.842563 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0603 12:25:33.962934 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 12:25:34.049552 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:34.107653 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:34.110594 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:34.205263 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.345061511s)
	I0603 12:25:34.205315 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:34.205328 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:34.205726 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:34.205744 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:34.205755 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:34.205786 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:34.205838 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:34.206122 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:34.206140 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:34.206142 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:34.564029 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:34.621424 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:34.621952 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:34.812209 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:35.053298 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:35.111227 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:35.115219 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:35.560207 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:35.607579 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.644597027s)
	I0603 12:25:35.607651 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:35.607672 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:35.608000 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:35.608020 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:35.608030 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:35.608038 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:35.608049 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:35.608290 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:35.608305 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:35.608315 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:35.609668 1086826 addons.go:475] Verifying addon gcp-auth=true in "addons-699562"
	I0603 12:25:35.611353 1086826 out.go:177] * Verifying gcp-auth addon...
	I0603 12:25:35.613544 1086826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0603 12:25:35.622615 1086826 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0603 12:25:35.622636 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:35.622740 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:35.630159 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:36.049713 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:36.107880 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:36.111569 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:36.116514 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:36.550083 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:36.607537 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:36.610472 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:36.616565 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:37.049483 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:37.108073 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:37.112270 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:37.116999 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:37.310101 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:37.551354 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:37.607806 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:37.611159 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:37.616391 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:38.050593 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:38.108207 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:38.110598 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:38.116688 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:38.550813 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:38.608089 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:38.610368 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:38.617203 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:39.050555 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:39.109214 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:39.111875 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:39.117025 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:39.311090 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:39.550464 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:39.609235 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:39.619822 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:39.623829 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:40.050197 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:40.107940 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:40.111079 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:40.117190 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:40.555709 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:40.606871 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:40.610515 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:40.617821 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:41.051097 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:41.108305 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:41.112441 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:41.120952 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:41.550698 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:41.607766 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:41.611170 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:41.616572 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:41.812095 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:42.056884 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:42.107517 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:42.110628 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:42.117448 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:42.550228 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:42.607367 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:42.610776 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:42.617304 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:43.050537 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:43.107475 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:43.110791 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:43.117489 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:43.549629 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:43.607992 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:43.610648 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:43.617442 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:44.050927 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:44.108088 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:44.111037 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:44.117569 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:44.311412 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:44.549239 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:44.607460 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:44.610239 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:44.616396 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:45.050428 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:45.108292 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:45.111223 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:45.116803 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:45.549667 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:45.608159 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:45.611071 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:45.617369 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:46.049740 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:46.108356 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:46.111863 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:46.117184 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:46.550035 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:46.610665 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:46.616929 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:46.617766 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:46.811474 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:47.050811 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:47.108517 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:47.111277 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:47.118121 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:47.717272 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:47.718579 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:47.720052 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:47.720582 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:48.050550 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:48.107795 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:48.124102 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:48.125107 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:48.550864 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:48.608389 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:48.612915 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:48.617073 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:48.812147 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:49.050012 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:49.108020 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:49.111903 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:49.117394 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:49.553646 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:49.616003 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:49.616158 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:49.617707 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:50.050251 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:50.107666 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:50.110733 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:50.117753 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:50.551307 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:50.608304 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:50.611064 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:50.619295 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:50.812287 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:51.049733 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:51.108750 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:51.111336 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:51.117075 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:51.549537 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:51.607284 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:51.616857 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:51.621130 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:52.050226 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:52.107358 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:52.109850 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:52.117931 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:52.550300 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:52.607639 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:52.610451 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:52.616746 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:53.050021 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:53.107896 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:53.110487 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:53.118939 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:53.310161 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:53.549054 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:53.608773 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:53.611208 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:53.616121 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:54.051892 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:54.107514 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:54.110875 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:54.117146 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:54.550002 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:54.613315 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:54.614243 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:54.617980 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:55.050714 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:55.108031 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:55.114500 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:55.116547 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:55.311779 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:55.549866 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:55.607101 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:55.609799 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:55.616946 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:56.050001 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:56.107340 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:56.110621 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:56.116795 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:56.549343 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:56.606883 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:56.611566 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:56.616628 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:57.049643 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:57.107787 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:57.110720 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:57.116984 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:57.552490 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:57.608047 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:57.610865 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:57.617100 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:57.811362 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:58.050515 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:58.107939 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:58.111269 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:58.116768 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:58.550559 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:58.607094 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:58.610875 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:58.624472 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:59.050561 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:59.107350 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:59.112122 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:59.116553 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:59.550259 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:59.607876 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:59.611375 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:59.616611 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:59.813733 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:00.050436 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:00.107324 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:00.110757 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:00.116870 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:00.551058 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:00.607390 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:00.611966 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:01.121735 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:01.121924 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:01.126130 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:01.126983 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:01.129553 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:01.550366 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:01.607513 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:01.613604 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:01.617595 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:02.050994 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:02.107780 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:02.112490 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:02.116676 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:02.311086 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:02.550413 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:02.607651 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:02.611082 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:02.616850 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:03.057400 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:03.112385 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:03.120791 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:03.123378 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:03.564652 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:03.608348 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:03.613237 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:03.617465 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:04.049337 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:04.108077 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:04.111136 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:04.116450 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:04.312631 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:04.550826 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:04.607574 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:04.610189 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:04.616596 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:05.049987 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:05.107654 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:05.112022 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:05.116491 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:05.558654 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:05.610678 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:05.612773 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:05.616256 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:06.050837 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:06.108894 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:06.116558 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:06.117868 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:06.318490 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:06.552708 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:06.607804 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:06.611009 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:06.616728 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:07.050266 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:07.108163 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:07.110480 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:07.116995 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:07.549887 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:07.607248 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:07.609801 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:07.616886 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:08.050013 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:08.107356 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:08.110936 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:08.117445 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:08.549795 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:08.608572 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:08.612546 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:08.618812 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:08.809645 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:09.050140 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:09.108213 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:09.110707 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:09.126656 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:09.627655 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:09.628075 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:09.629523 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:09.632992 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:10.052780 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:10.107687 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:10.115155 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:10.117087 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:10.559545 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:10.618077 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:10.618827 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:10.624128 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:10.809946 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:11.051024 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:11.107444 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:11.119674 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:11.129911 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:11.551452 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:11.608709 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:11.612387 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:11.618669 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:12.051203 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:12.108483 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:12.113638 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:12.117352 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:12.550419 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:12.606898 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:12.615006 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:12.622892 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:12.810639 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:13.050044 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:13.107552 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:13.110151 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:13.116169 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:13.549565 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:13.607597 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:13.610397 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:13.616616 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:14.075866 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:14.107785 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:14.110642 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:14.116976 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:14.550402 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:14.607903 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:14.624896 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:14.625760 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:14.811698 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:15.050201 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:15.107696 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:15.110185 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:15.116396 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:15.549935 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:15.608025 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:15.613716 1086826 kapi.go:107] duration metric: took 44.009864907s to wait for kubernetes.io/minikube-addons=registry ...
	I0603 12:26:15.616045 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:16.050623 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:16.107997 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:16.118286 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:16.550566 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:16.607168 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:16.617337 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:17.049985 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:17.111754 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:17.117847 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:17.310623 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:17.550155 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:17.607335 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:17.617935 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:18.050331 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:18.107929 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:18.117159 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:18.549743 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:18.608683 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:18.616912 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:19.051032 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:19.107395 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:19.117733 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:19.312689 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:19.551709 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:19.607931 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:19.618005 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:20.056111 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:20.120487 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:20.123371 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:20.549077 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:20.607149 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:20.617546 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:21.049157 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:21.107033 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:21.117469 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:21.553377 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:21.607131 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:21.617199 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:21.811213 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:22.049945 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:22.107589 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:22.117235 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:22.550601 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:22.748260 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:22.748410 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:23.051046 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:23.107427 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:23.116770 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:23.552644 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:23.607528 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:23.616536 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:23.815297 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:24.049690 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:24.107818 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:24.116850 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:24.549590 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:24.607702 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:24.616744 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:25.051117 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:25.110714 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:25.118175 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:25.549997 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:25.607989 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:25.616932 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:26.049772 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:26.107839 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:26.117258 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:26.313163 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:26.550215 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:26.609188 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:26.617513 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:27.050025 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:27.110354 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:27.121036 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:27.552177 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:27.609081 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:27.617129 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:28.063267 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:28.111656 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:28.140408 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:28.554049 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:28.607648 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:28.617599 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:28.815776 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:29.050702 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:29.108722 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:29.117641 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:29.549232 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:29.612125 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:29.619247 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:30.050328 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:30.107579 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:30.116842 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:30.549987 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:30.607456 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:30.617024 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:31.049548 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:31.107580 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:31.117334 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:31.310166 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:31.549604 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:31.610474 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:31.619704 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:32.049274 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:32.107173 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:32.117440 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:32.549145 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:32.607102 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:32.617211 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:33.050797 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:33.107940 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:33.117288 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:33.314001 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:33.553058 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:33.607307 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:33.625683 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:34.051432 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:34.108409 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:34.119806 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:34.549419 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:34.608318 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:34.618266 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:35.049502 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:35.107005 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:35.116950 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:35.549499 1086826 kapi.go:107] duration metric: took 1m2.005422319s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0603 12:26:35.607675 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:35.616842 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:35.810331 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:36.108106 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:36.117479 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:36.609257 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:36.617584 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:37.108854 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:37.116821 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:37.792868 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:37.793314 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:37.811615 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:38.107926 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:38.117239 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:38.607378 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:38.616370 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:39.107642 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:39.117063 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:39.609297 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:39.616591 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:40.107745 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:40.117791 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:40.319439 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:40.986440 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:41.000146 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:41.108887 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:41.119076 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:41.607823 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:41.617096 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:42.108419 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:42.117525 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:42.608967 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:42.617643 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:42.811899 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:43.108162 1086826 kapi.go:107] duration metric: took 1m11.5078006s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0603 12:26:43.117460 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:43.617102 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:44.306990 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:44.618442 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:45.117494 1086826 kapi.go:107] duration metric: took 1m9.503943568s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0603 12:26:45.119711 1086826 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-699562 cluster.
	I0603 12:26:45.121144 1086826 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0603 12:26:45.122532 1086826 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0603 12:26:45.123925 1086826 out.go:177] * Enabled addons: metrics-server, storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, inspektor-gadget, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0603 12:26:45.125165 1086826 addons.go:510] duration metric: took 1m22.085788168s for enable addons: enabled=[metrics-server storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner inspektor-gadget helm-tiller yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
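
Note on the gcp-auth messages above: below is a minimal, self-contained Go sketch (using the standard Kubernetes API types) of a pod that opts out of the credential mounting by carrying the `gcp-auth-skip-secret` label mentioned in the output. Everything except the label key is illustrative; in particular the label value "true" is an assumption, since the message only asks for the key to be present.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: only the gcp-auth-skip-secret label key comes from
	// the minikube message above; the name, image and label value are placeholders.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "example-no-gcp-auth",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // could be piped into `kubectl apply -f -`
}
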
	I0603 12:26:45.311068 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:47.318344 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:49.810186 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:51.811646 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:54.311268 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:56.311669 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:58.311958 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:00.811116 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:02.812103 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:05.311540 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:07.811048 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:09.811356 1086826 pod_ready.go:92] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:09.811384 1086826 pod_ready.go:81] duration metric: took 1m37.006778734s for pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:09.811395 1086826 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2sw5z" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:09.817624 1086826 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2sw5z" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:09.817647 1086826 pod_ready.go:81] duration metric: took 6.245626ms for pod "nvidia-device-plugin-daemonset-2sw5z" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:09.817666 1086826 pod_ready.go:38] duration metric: took 1m38.19943696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:27:09.817688 1086826 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:27:09.817740 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:27:09.817800 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:27:09.866770 1086826 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:09.866803 1086826 cri.go:89] found id: ""
	I0603 12:27:09.866814 1086826 logs.go:276] 1 containers: [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4]
	I0603 12:27:09.866881 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:09.871934 1086826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:27:09.872023 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:27:09.912368 1086826 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:09.912393 1086826 cri.go:89] found id: ""
	I0603 12:27:09.912402 1086826 logs.go:276] 1 containers: [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514]
	I0603 12:27:09.912466 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:09.917195 1086826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:27:09.917259 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:27:09.966345 1086826 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:09.966367 1086826 cri.go:89] found id: ""
	I0603 12:27:09.966376 1086826 logs.go:276] 1 containers: [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3]
	I0603 12:27:09.966437 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:09.970794 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:27:09.970861 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:27:10.010220 1086826 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:10.010243 1086826 cri.go:89] found id: ""
	I0603 12:27:10.010252 1086826 logs.go:276] 1 containers: [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b]
	I0603 12:27:10.010307 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:10.014883 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:27:10.014938 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:27:10.056002 1086826 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:10.056034 1086826 cri.go:89] found id: ""
	I0603 12:27:10.056046 1086826 logs.go:276] 1 containers: [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e]
	I0603 12:27:10.056117 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:10.060632 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:27:10.060698 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:27:10.098738 1086826 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:10.098762 1086826 cri.go:89] found id: ""
	I0603 12:27:10.098770 1086826 logs.go:276] 1 containers: [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8]
	I0603 12:27:10.098819 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:10.103199 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:27:10.103278 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:27:10.152873 1086826 cri.go:89] found id: ""
	I0603 12:27:10.152908 1086826 logs.go:276] 0 containers: []
	W0603 12:27:10.152919 1086826 logs.go:278] No container was found matching "kindnet"
	I0603 12:27:10.152934 1086826 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:27:10.152953 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:27:11.259298 1086826 logs.go:123] Gathering logs for container status ...
	I0603 12:27:11.259359 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:27:11.306971 1086826 logs.go:123] Gathering logs for kubelet ...
	I0603 12:27:11.307009 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0603 12:27:11.359963 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:11.360135 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:11.392921 1086826 logs.go:123] Gathering logs for etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] ...
	I0603 12:27:11.392964 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:11.453617 1086826 logs.go:123] Gathering logs for kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] ...
	I0603 12:27:11.453666 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:11.502128 1086826 logs.go:123] Gathering logs for kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] ...
	I0603 12:27:11.502170 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:11.563524 1086826 logs.go:123] Gathering logs for kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] ...
	I0603 12:27:11.563569 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:11.600560 1086826 logs.go:123] Gathering logs for dmesg ...
	I0603 12:27:11.600598 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:27:11.616410 1086826 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:27:11.616449 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:27:11.743210 1086826 logs.go:123] Gathering logs for kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] ...
	I0603 12:27:11.743245 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:11.790432 1086826 logs.go:123] Gathering logs for coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] ...
	I0603 12:27:11.790480 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:11.832586 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:11.832622 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0603 12:27:11.832717 1086826 out.go:239] X Problems detected in kubelet:
	W0603 12:27:11.832734 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:11.832747 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:11.832762 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:11.832772 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:21.834706 1086826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:27:21.854545 1086826 api_server.go:72] duration metric: took 1m58.8152147s to wait for apiserver process to appear ...
	I0603 12:27:21.854577 1086826 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:27:21.854630 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:27:21.854692 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:27:21.895375 1086826 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:21.895398 1086826 cri.go:89] found id: ""
	I0603 12:27:21.895406 1086826 logs.go:276] 1 containers: [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4]
	I0603 12:27:21.895460 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:21.900015 1086826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:27:21.900067 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:27:21.943623 1086826 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:21.943658 1086826 cri.go:89] found id: ""
	I0603 12:27:21.943667 1086826 logs.go:276] 1 containers: [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514]
	I0603 12:27:21.943731 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:21.948627 1086826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:27:21.948694 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:27:21.992699 1086826 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:21.992725 1086826 cri.go:89] found id: ""
	I0603 12:27:21.992735 1086826 logs.go:276] 1 containers: [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3]
	I0603 12:27:21.992800 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:21.998808 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:27:21.998885 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:27:22.044523 1086826 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:22.044551 1086826 cri.go:89] found id: ""
	I0603 12:27:22.044562 1086826 logs.go:276] 1 containers: [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b]
	I0603 12:27:22.044631 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:22.049328 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:27:22.049401 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:27:22.091373 1086826 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:22.091398 1086826 cri.go:89] found id: ""
	I0603 12:27:22.091406 1086826 logs.go:276] 1 containers: [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e]
	I0603 12:27:22.091468 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:22.095823 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:27:22.095878 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:27:22.134585 1086826 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:22.134616 1086826 cri.go:89] found id: ""
	I0603 12:27:22.134627 1086826 logs.go:276] 1 containers: [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8]
	I0603 12:27:22.134682 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:22.138852 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:27:22.138911 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:27:22.183843 1086826 cri.go:89] found id: ""
	I0603 12:27:22.183868 1086826 logs.go:276] 0 containers: []
	W0603 12:27:22.183876 1086826 logs.go:278] No container was found matching "kindnet"
	I0603 12:27:22.183886 1086826 logs.go:123] Gathering logs for dmesg ...
	I0603 12:27:22.183900 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:27:22.199319 1086826 logs.go:123] Gathering logs for kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] ...
	I0603 12:27:22.199362 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:22.255816 1086826 logs.go:123] Gathering logs for etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] ...
	I0603 12:27:22.255848 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:22.309345 1086826 logs.go:123] Gathering logs for kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] ...
	I0603 12:27:22.309387 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:22.349280 1086826 logs.go:123] Gathering logs for kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] ...
	I0603 12:27:22.349321 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:22.418782 1086826 logs.go:123] Gathering logs for kubelet ...
	I0603 12:27:22.418821 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0603 12:27:22.478644 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:22.478888 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:22.516116 1086826 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:27:22.516161 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:27:22.649308 1086826 logs.go:123] Gathering logs for coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] ...
	I0603 12:27:22.649341 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:22.689734 1086826 logs.go:123] Gathering logs for kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] ...
	I0603 12:27:22.689785 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:22.733915 1086826 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:27:22.733952 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:27:23.479507 1086826 logs.go:123] Gathering logs for container status ...
	I0603 12:27:23.479570 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:27:23.533200 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:23.533234 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0603 12:27:23.533305 1086826 out.go:239] X Problems detected in kubelet:
	W0603 12:27:23.533317 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:23.533324 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:23.533336 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:23.533346 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:33.534751 1086826 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0603 12:27:33.539341 1086826 api_server.go:279] https://192.168.39.241:8443/healthz returned 200:
	ok
	I0603 12:27:33.540654 1086826 api_server.go:141] control plane version: v1.30.1
	I0603 12:27:33.540682 1086826 api_server.go:131] duration metric: took 11.686097199s to wait for apiserver health ...
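
For context on the healthz check recorded just above, here is a small self-contained Go sketch of polling an apiserver /healthz endpoint until it returns 200 "ok". This is not minikube's implementation; the URL and timeouts are placeholders taken from this run, and TLS verification is skipped only because the sketch carries no CA bundle.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}

func main() {
	// Placeholder endpoint; this run checked https://192.168.39.241:8443/healthz.
	if err := waitForHealthz("https://192.168.39.241:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
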
	I0603 12:27:33.540692 1086826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:27:33.540725 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:27:33.540790 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:27:33.580781 1086826 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:33.580805 1086826 cri.go:89] found id: ""
	I0603 12:27:33.580815 1086826 logs.go:276] 1 containers: [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4]
	I0603 12:27:33.580884 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.585293 1086826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:27:33.585356 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:27:33.624776 1086826 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:33.624799 1086826 cri.go:89] found id: ""
	I0603 12:27:33.624807 1086826 logs.go:276] 1 containers: [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514]
	I0603 12:27:33.624855 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.629338 1086826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:27:33.629437 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:27:33.676960 1086826 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:33.676995 1086826 cri.go:89] found id: ""
	I0603 12:27:33.677007 1086826 logs.go:276] 1 containers: [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3]
	I0603 12:27:33.677078 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.681551 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:27:33.681615 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:27:33.719279 1086826 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:33.719302 1086826 cri.go:89] found id: ""
	I0603 12:27:33.719311 1086826 logs.go:276] 1 containers: [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b]
	I0603 12:27:33.719384 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.723688 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:27:33.723743 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:27:33.760181 1086826 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:33.760210 1086826 cri.go:89] found id: ""
	I0603 12:27:33.760221 1086826 logs.go:276] 1 containers: [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e]
	I0603 12:27:33.760283 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.764788 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:27:33.764868 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:27:33.805979 1086826 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:33.806017 1086826 cri.go:89] found id: ""
	I0603 12:27:33.806030 1086826 logs.go:276] 1 containers: [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8]
	I0603 12:27:33.806117 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.810640 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:27:33.810719 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:27:33.860436 1086826 cri.go:89] found id: ""
	I0603 12:27:33.860478 1086826 logs.go:276] 0 containers: []
	W0603 12:27:33.860490 1086826 logs.go:278] No container was found matching "kindnet"
	I0603 12:27:33.860503 1086826 logs.go:123] Gathering logs for kubelet ...
	I0603 12:27:33.860523 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0603 12:27:33.912867 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:33.913116 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:33.946641 1086826 logs.go:123] Gathering logs for kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] ...
	I0603 12:27:33.946688 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:33.995447 1086826 logs.go:123] Gathering logs for etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] ...
	I0603 12:27:33.995490 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:34.053247 1086826 logs.go:123] Gathering logs for kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] ...
	I0603 12:27:34.053293 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:34.092640 1086826 logs.go:123] Gathering logs for kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] ...
	I0603 12:27:34.092671 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:34.161946 1086826 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:27:34.161991 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:27:35.152385 1086826 logs.go:123] Gathering logs for container status ...
	I0603 12:27:35.152441 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:27:35.199230 1086826 logs.go:123] Gathering logs for dmesg ...
	I0603 12:27:35.199272 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:27:35.215226 1086826 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:27:35.215263 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:27:35.337967 1086826 logs.go:123] Gathering logs for coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] ...
	I0603 12:27:35.338010 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:35.380440 1086826 logs.go:123] Gathering logs for kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] ...
	I0603 12:27:35.380475 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:35.427393 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:35.427426 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0603 12:27:35.427490 1086826 out.go:239] X Problems detected in kubelet:
	W0603 12:27:35.427499 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:35.427506 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:35.427513 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:35.427520 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:45.442234 1086826 system_pods.go:59] 18 kube-system pods found
	I0603 12:27:45.442278 1086826 system_pods.go:61] "coredns-7db6d8ff4d-hmhdl" [c3cfe166-99f3-4ac9-9905-8be76bcb511d] Running
	I0603 12:27:45.442285 1086826 system_pods.go:61] "csi-hostpath-attacher-0" [c6efaa50-400a-4e2d-9610-290b08ca0e27] Running
	I0603 12:27:45.442291 1086826 system_pods.go:61] "csi-hostpath-resizer-0" [d102455a-acf1-4067-b512-3e7d24676733] Running
	I0603 12:27:45.442296 1086826 system_pods.go:61] "csi-hostpathplugin-ldcdv" [db932b0d-726d-4b8d-b47c-dcbc1657a70d] Running
	I0603 12:27:45.442300 1086826 system_pods.go:61] "etcd-addons-699562" [90cdaf3f-ae75-439a-84f1-78cba28a6085] Running
	I0603 12:27:45.442305 1086826 system_pods.go:61] "kube-apiserver-addons-699562" [08e077e7-849e-40e9-bbb8-d3d5857a87bb] Running
	I0603 12:27:45.442309 1086826 system_pods.go:61] "kube-controller-manager-addons-699562" [1fb0b7db-5179-43de-bdea-0a9c8666d1dd] Running
	I0603 12:27:45.442313 1086826 system_pods.go:61] "kube-ingress-dns-minikube" [21a1c096-2479-4d10-864a-8b202b08a284] Running
	I0603 12:27:45.442318 1086826 system_pods.go:61] "kube-proxy-6ssr8" [609d1553-86b5-46ea-b503-bdfd9f291571] Running
	I0603 12:27:45.442323 1086826 system_pods.go:61] "kube-scheduler-addons-699562" [d5748ac9-a1c8-496a-aa0f-8a75c6a8b12c] Running
	I0603 12:27:45.442327 1086826 system_pods.go:61] "metrics-server-c59844bb4-pl8qk" [26f4580a-9514-47c0-aa22-11c454eaca32] Running
	I0603 12:27:45.442332 1086826 system_pods.go:61] "nvidia-device-plugin-daemonset-2sw5z" [3ad1866a-b3d5-4783-b2dd-557082180d8f] Running
	I0603 12:27:45.442337 1086826 system_pods.go:61] "registry-jrrh7" [af432feb-b699-477a-8cd5-ff109071d13d] Running
	I0603 12:27:45.442342 1086826 system_pods.go:61] "registry-proxy-n8265" [343bbd2c-1a4b-4796-8401-ebd3686c0a61] Running
	I0603 12:27:45.442348 1086826 system_pods.go:61] "snapshot-controller-745499f584-dk5sk" [e74e33d1-7eaf-46d7-bcb2-2a088a1687bd] Running
	I0603 12:27:45.442356 1086826 system_pods.go:61] "snapshot-controller-745499f584-nkg59" [dd8cffdf-f15c-405a-95d3-fa13eb7a4908] Running
	I0603 12:27:45.442361 1086826 system_pods.go:61] "storage-provisioner" [c3d92bc5-3f10-47e3-84a9-f532f14deae4] Running
	I0603 12:27:45.442370 1086826 system_pods.go:61] "tiller-deploy-6677d64bcd-k4tt8" [0ecadef4-5251-4d11-a39c-77a196200334] Running
	I0603 12:27:45.442378 1086826 system_pods.go:74] duration metric: took 11.901678581s to wait for pod list to return data ...
	I0603 12:27:45.442391 1086826 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:27:45.444510 1086826 default_sa.go:45] found service account: "default"
	I0603 12:27:45.444530 1086826 default_sa.go:55] duration metric: took 2.131961ms for default service account to be created ...
	I0603 12:27:45.444537 1086826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:27:45.453736 1086826 system_pods.go:86] 18 kube-system pods found
	I0603 12:27:45.453760 1086826 system_pods.go:89] "coredns-7db6d8ff4d-hmhdl" [c3cfe166-99f3-4ac9-9905-8be76bcb511d] Running
	I0603 12:27:45.453766 1086826 system_pods.go:89] "csi-hostpath-attacher-0" [c6efaa50-400a-4e2d-9610-290b08ca0e27] Running
	I0603 12:27:45.453770 1086826 system_pods.go:89] "csi-hostpath-resizer-0" [d102455a-acf1-4067-b512-3e7d24676733] Running
	I0603 12:27:45.453774 1086826 system_pods.go:89] "csi-hostpathplugin-ldcdv" [db932b0d-726d-4b8d-b47c-dcbc1657a70d] Running
	I0603 12:27:45.453778 1086826 system_pods.go:89] "etcd-addons-699562" [90cdaf3f-ae75-439a-84f1-78cba28a6085] Running
	I0603 12:27:45.453782 1086826 system_pods.go:89] "kube-apiserver-addons-699562" [08e077e7-849e-40e9-bbb8-d3d5857a87bb] Running
	I0603 12:27:45.453786 1086826 system_pods.go:89] "kube-controller-manager-addons-699562" [1fb0b7db-5179-43de-bdea-0a9c8666d1dd] Running
	I0603 12:27:45.453791 1086826 system_pods.go:89] "kube-ingress-dns-minikube" [21a1c096-2479-4d10-864a-8b202b08a284] Running
	I0603 12:27:45.453795 1086826 system_pods.go:89] "kube-proxy-6ssr8" [609d1553-86b5-46ea-b503-bdfd9f291571] Running
	I0603 12:27:45.453799 1086826 system_pods.go:89] "kube-scheduler-addons-699562" [d5748ac9-a1c8-496a-aa0f-8a75c6a8b12c] Running
	I0603 12:27:45.453805 1086826 system_pods.go:89] "metrics-server-c59844bb4-pl8qk" [26f4580a-9514-47c0-aa22-11c454eaca32] Running
	I0603 12:27:45.453809 1086826 system_pods.go:89] "nvidia-device-plugin-daemonset-2sw5z" [3ad1866a-b3d5-4783-b2dd-557082180d8f] Running
	I0603 12:27:45.453814 1086826 system_pods.go:89] "registry-jrrh7" [af432feb-b699-477a-8cd5-ff109071d13d] Running
	I0603 12:27:45.453818 1086826 system_pods.go:89] "registry-proxy-n8265" [343bbd2c-1a4b-4796-8401-ebd3686c0a61] Running
	I0603 12:27:45.453821 1086826 system_pods.go:89] "snapshot-controller-745499f584-dk5sk" [e74e33d1-7eaf-46d7-bcb2-2a088a1687bd] Running
	I0603 12:27:45.453828 1086826 system_pods.go:89] "snapshot-controller-745499f584-nkg59" [dd8cffdf-f15c-405a-95d3-fa13eb7a4908] Running
	I0603 12:27:45.453832 1086826 system_pods.go:89] "storage-provisioner" [c3d92bc5-3f10-47e3-84a9-f532f14deae4] Running
	I0603 12:27:45.453835 1086826 system_pods.go:89] "tiller-deploy-6677d64bcd-k4tt8" [0ecadef4-5251-4d11-a39c-77a196200334] Running
	I0603 12:27:45.453842 1086826 system_pods.go:126] duration metric: took 9.30001ms to wait for k8s-apps to be running ...
	I0603 12:27:45.453849 1086826 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:27:45.453893 1086826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:27:45.472770 1086826 system_svc.go:56] duration metric: took 18.912332ms WaitForService to wait for kubelet
	I0603 12:27:45.472793 1086826 kubeadm.go:576] duration metric: took 2m22.433473354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:27:45.472813 1086826 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:27:45.476327 1086826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:27:45.476351 1086826 node_conditions.go:123] node cpu capacity is 2
	I0603 12:27:45.476365 1086826 node_conditions.go:105] duration metric: took 3.54603ms to run NodePressure ...
	I0603 12:27:45.476377 1086826 start.go:240] waiting for startup goroutines ...
	I0603 12:27:45.476384 1086826 start.go:245] waiting for cluster config update ...
	I0603 12:27:45.476401 1086826 start.go:254] writing updated cluster config ...
	I0603 12:27:45.476702 1086826 ssh_runner.go:195] Run: rm -f paused
	I0603 12:27:45.526801 1086826 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:27:45.529608 1086826 out.go:177] * Done! kubectl is now configured to use "addons-699562" cluster and "default" namespace by default
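	[editor's note, not part of the captured log] The trace above shows how the diagnostics were gathered on the node: `which crictl`, then `sudo crictl ps -a --quiet --name=<component>` to find container IDs, then `sudo crictl logs --tail 400 <id>` per container, plus `journalctl -u kubelet -n 400`. The following is a minimal, hypothetical Go sketch that replays those same commands for manual debugging on a CRI-O node (run via `minikube ssh`); the helper names are illustrative and it assumes crictl is on PATH and sudo is available. It is not minikube's implementation.

	// Hypothetical sketch: replays the crictl-based log gathering visible in the trace above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all CRI containers (any state) whose name matches filter,
	// mirroring `sudo crictl ps -a --quiet --name=<filter>` from the trace.
	func containerIDs(filter string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+filter).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs prints the last n log lines of one container, mirroring
	// `sudo crictl logs --tail 400 <id>` from the trace.
	func tailLogs(id string, n int) error {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		fmt.Printf("==> %s <==\n%s\n", id, out)
		return err
	}

	func main() {
		// Same component name filters that appear in the gathered logs above.
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Println("listing", name, "failed:", err)
				continue
			}
			for _, id := range ids {
				if err := tailLogs(id, 400); err != nil {
					fmt.Println("logs for", id, "failed:", err)
				}
			}
		}
	}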
	
	
	==> CRI-O <==
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.528938196Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417833528913422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=121ccec4-380f-46bd-9fef-9b30dbc18861 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.529569461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7c6a863-792e-4035-a4c3-6d145ae50a33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.529695724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7c6a863-792e-4035-a4c3-6d145ae50a33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.530102065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d13bd5e73c30ae4165f06b2185319aeacf35e7ffaa0b56363eb04137f5f6968,PodSandboxId:b07a28e9eef859d799892174459ac6d06f7f796f90c1d805586d8d4438fd0f2d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717417585093806521,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rl49z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d6aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78062314a87041d03dc6e5e7132662b0ff33b7d83c2f19e08843de0216f60c0f,PodSandboxId:52827ed278e4e8c5717896922ec331ff8f986d672d90a707092e16fb2a1356a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717417584963806225,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7kn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 676ead8d-a891-4dac-8cc5-992c426fcdc9,},Annotations:map[string]string{io.kubernetes.container.hash: d1d99a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717417579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a,PodSandboxId:543fde334d4b530434d593b1fb43a32cd0a
a6dd937131e82b4db8d5f79083144,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1717417539747054220,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a1c096-2479-4d10-864a-8b202b08a284,},Annotations:map[string]string{io.kubernetes.container.hash: 409d8265,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandboxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837fe04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7c6a863-792e-4035-a4c3-6d145ae50a33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.567383343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=faa35769-6109-4f37-9e76-68c92b302a11 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.567470068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=faa35769-6109-4f37-9e76-68c92b302a11 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.568582446Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2125ce5-aa30-4bd6-8252-4cd5fc3a0d94 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.570152328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417833570126196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2125ce5-aa30-4bd6-8252-4cd5fc3a0d94 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.570773856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d5896ab-34f8-41c7-8f10-e1d224bd05c9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.570844543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d5896ab-34f8-41c7-8f10-e1d224bd05c9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.571191236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d13bd5e73c30ae4165f06b2185319aeacf35e7ffaa0b56363eb04137f5f6968,PodSandboxId:b07a28e9eef859d799892174459ac6d06f7f796f90c1d805586d8d4438fd0f2d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717417585093806521,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rl49z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d6aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78062314a87041d03dc6e5e7132662b0ff33b7d83c2f19e08843de0216f60c0f,PodSandboxId:52827ed278e4e8c5717896922ec331ff8f986d672d90a707092e16fb2a1356a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717417584963806225,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7kn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 676ead8d-a891-4dac-8cc5-992c426fcdc9,},Annotations:map[string]string{io.kubernetes.container.hash: d1d99a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717417579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a,PodSandboxId:543fde334d4b530434d593b1fb43a32cd0a
a6dd937131e82b4db8d5f79083144,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1717417539747054220,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a1c096-2479-4d10-864a-8b202b08a284,},Annotations:map[string]string{io.kubernetes.container.hash: 409d8265,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandboxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837fe04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d5896ab-34f8-41c7-8f10-e1d224bd05c9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.604619003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d791d7a5-a49c-4828-8b1a-1abda82ba5ef name=/runtime.v1.RuntimeService/Version
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.604811155Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d791d7a5-a49c-4828-8b1a-1abda82ba5ef name=/runtime.v1.RuntimeService/Version
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.606089887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d49b64d9-dffb-4f75-a133-301bf33b4ea5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.607430682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417833607405119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d49b64d9-dffb-4f75-a133-301bf33b4ea5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.608092019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ded845f4-ad6b-4e79-8933-a2af527044f3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.608162721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ded845f4-ad6b-4e79-8933-a2af527044f3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.608480061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d13bd5e73c30ae4165f06b2185319aeacf35e7ffaa0b56363eb04137f5f6968,PodSandboxId:b07a28e9eef859d799892174459ac6d06f7f796f90c1d805586d8d4438fd0f2d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717417585093806521,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rl49z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d6aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78062314a87041d03dc6e5e7132662b0ff33b7d83c2f19e08843de0216f60c0f,PodSandboxId:52827ed278e4e8c5717896922ec331ff8f986d672d90a707092e16fb2a1356a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717417584963806225,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7kn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 676ead8d-a891-4dac-8cc5-992c426fcdc9,},Annotations:map[string]string{io.kubernetes.container.hash: d1d99a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717417579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a,PodSandboxId:543fde334d4b530434d593b1fb43a32cd0a
a6dd937131e82b4db8d5f79083144,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1717417539747054220,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a1c096-2479-4d10-864a-8b202b08a284,},Annotations:map[string]string{io.kubernetes.container.hash: 409d8265,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandboxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837fe04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ded845f4-ad6b-4e79-8933-a2af527044f3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.645864445Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e023e47a-ce10-4ded-a64a-ae6a2d3bcb26 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.645955534Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e023e47a-ce10-4ded-a64a-ae6a2d3bcb26 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.647182216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23f6cfba-eaea-4c30-b7ac-5db27e7dd1e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.648442087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417833648415709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23f6cfba-eaea-4c30-b7ac-5db27e7dd1e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.649011776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0adb0d59-0b1d-4b86-b2aa-3f6e75a8abb0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.649103464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0adb0d59-0b1d-4b86-b2aa-3f6e75a8abb0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.649524921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d13bd5e73c30ae4165f06b2185319aeacf35e7ffaa0b56363eb04137f5f6968,PodSandboxId:b07a28e9eef859d799892174459ac6d06f7f796f90c1d805586d8d4438fd0f2d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717417585093806521,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rl49z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d6aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78062314a87041d03dc6e5e7132662b0ff33b7d83c2f19e08843de0216f60c0f,PodSandboxId:52827ed278e4e8c5717896922ec331ff8f986d672d90a707092e16fb2a1356a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717417584963806225,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7kn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 676ead8d-a891-4dac-8cc5-992c426fcdc9,},Annotations:map[string]string{io.kubernetes.container.hash: d1d99a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717417579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a,PodSandboxId:543fde334d4b530434d593b1fb43a32cd0a
a6dd937131e82b4db8d5f79083144,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1717417539747054220,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a1c096-2479-4d10-864a-8b202b08a284,},Annotations:map[string]string{io.kubernetes.container.hash: 409d8265,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandboxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837fe04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0adb0d59-0b1d-4b86-b2aa-3f6e75a8abb0 name=/runtime.v1.RuntimeService/ListContainers
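
The ListContainers / Version / ImageFsInfo entries above are CRI-O's gRPC interceptor debug traces for routine CRI polling. A minimal sketch for pulling the same data by hand, assuming the addons-699562 profile is still running and that minikube and crictl are available as they were in this run (the exact commands are assumptions, not taken from the captured log):

  # tail CRI-O's own log on the node
  minikube -p addons-699562 ssh "sudo journalctl -u crio --since '12:30' --no-pager"

  # issue the same ListContainers call that produced the response above
  minikube -p addons-699562 ssh "sudo crictl ps -a -o json"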
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8d642bc327116       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   2cd7a7a28e0a5       hello-world-app-86c47465fc-79c22
	a173411215156       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                              2 minutes ago       Running             nginx                     0                   f58876a06d48d       nginx
	9bf5932194780       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                        2 minutes ago       Running             headlamp                  0                   83a0e5827ce1a       headlamp-68456f997b-tpgtj
	8f787a95dc6ea       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   266b9c9ff3c4b       gcp-auth-5db96cd9b4-vq6sn
	3d13bd5e73c30       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              patch                     0                   b07a28e9eef85       ingress-nginx-admission-patch-rl49z
	78062314a8704       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   52827ed278e4e       ingress-nginx-admission-create-h7kn8
	08062fd585905       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   3c836ea529a74       yakd-dashboard-5ddbf7d777-th7qj
	ff24eb8563b0c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   b76bfaf676bbe       local-path-provisioner-8d985888d-2trqm
	071b33296d63e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   c808b7e546b60       metrics-server-c59844bb4-pl8qk
	435447885e6a3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f             4 minutes ago       Exited              minikube-ingress-dns      0                   543fde334d4b5       kube-ingress-dns-minikube
	17a9104d81026       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   81961c6a37d61       storage-provisioner
	35f4eaf8d81f1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   0bfe8f4160274       coredns-7db6d8ff4d-hmhdl
	6add0233edc94       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                             5 minutes ago       Running             kube-proxy                0                   bdb166637cc76       kube-proxy-6ssr8
	0c7a1cc6df31c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   b7cc010079add       etcd-addons-699562
	5dacc96e3a0d6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                             5 minutes ago       Running             kube-controller-manager   0                   dfa5c4cb4bc79       kube-controller-manager-addons-699562
	ff21db0353955       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                             5 minutes ago       Running             kube-apiserver            0                   96186e4c50e5e       kube-apiserver-addons-699562
	92e20bf314646       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                             5 minutes ago       Running             kube-scheduler            0                   0b594b2f837fe       kube-scheduler-addons-699562
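
In the table above, the ingress-nginx admission create/patch jobs are expected to exit, but kube-ingress-dns showing Exited is worth a closer look. A hedged sketch for digging into that container, using the full ID from the ListContainers dump above (assumes the VM and its CRI-O state still exist when this is run):

  # state, exit code and mounts of the exited kube-ingress-dns container
  minikube -p addons-699562 ssh "sudo crictl inspect 435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a"

  # its captured stdout/stderr
  minikube -p addons-699562 ssh "sudo crictl logs 435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a"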
	
	
	==> coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] <==
	[INFO] 10.244.0.8:53029 - 52615 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000329373s
	[INFO] 10.244.0.8:53749 - 1624 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062672s
	[INFO] 10.244.0.8:53749 - 24926 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00015104s
	[INFO] 10.244.0.8:58411 - 17668 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000212275s
	[INFO] 10.244.0.8:58411 - 11274 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000595s
	[INFO] 10.244.0.8:59239 - 53735 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00017039s
	[INFO] 10.244.0.8:59239 - 37605 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000201028s
	[INFO] 10.244.0.8:52190 - 44357 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000260199s
	[INFO] 10.244.0.8:52190 - 54344 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000203243s
	[INFO] 10.244.0.8:40017 - 29233 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095616s
	[INFO] 10.244.0.8:40017 - 50748 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00023833s
	[INFO] 10.244.0.8:40407 - 532 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075947s
	[INFO] 10.244.0.8:40407 - 24106 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107276s
	[INFO] 10.244.0.8:55786 - 11074 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068658s
	[INFO] 10.244.0.8:55786 - 40000 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154175s
	[INFO] 10.244.0.22:48426 - 37810 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000911895s
	[INFO] 10.244.0.22:55143 - 37903 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000924721s
	[INFO] 10.244.0.22:54175 - 35195 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155679s
	[INFO] 10.244.0.22:46392 - 19652 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056593s
	[INFO] 10.244.0.22:44105 - 37037 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000209248s
	[INFO] 10.244.0.22:58175 - 33620 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073278s
	[INFO] 10.244.0.22:48829 - 15494 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001483557s
	[INFO] 10.244.0.22:45600 - 59491 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00177612s
	[INFO] 10.244.0.27:52018 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000487785s
	[INFO] 10.244.0.27:44227 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146213s
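
The NXDOMAIN lines above are ordinary Kubernetes search-domain expansion, not failures: pod resolv.conf defaults to ndots:5, so a name like registry.kube-system.svc.cluster.local is first tried with the kube-system.svc.cluster.local, svc.cluster.local and cluster.local suffixes appended, each returning NXDOMAIN, before the bare name resolves with NOERROR. A quick way to confirm this from inside the cluster, assuming a throwaway busybox pod (the dns-probe name and image tag are illustrative, not from this run):

  kubectl --context addons-699562 run dns-probe --rm -it --restart=Never --image=busybox:1.36 \
    -- sh -c 'cat /etc/resolv.conf; nslookup registry.kube-system.svc.cluster.local'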
	
	
	==> describe nodes <==
	Name:               addons-699562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-699562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=addons-699562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_25_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-699562
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:25:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-699562
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:30:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:28:43 +0000   Mon, 03 Jun 2024 12:25:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:28:43 +0000   Mon, 03 Jun 2024 12:25:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:28:43 +0000   Mon, 03 Jun 2024 12:25:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:28:43 +0000   Mon, 03 Jun 2024 12:25:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    addons-699562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 84ef91a2c9524e6487a854dd506d694c
	  System UUID:                84ef91a2-c952-4e64-87a8-54dd506d694c
	  Boot ID:                    af6edd86-d456-43e7-97d1-dac4dba15c8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-79c22          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-5db96cd9b4-vq6sn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  headlamp                    headlamp-68456f997b-tpgtj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-7db6d8ff4d-hmhdl                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m11s
	  kube-system                 etcd-addons-699562                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m25s
	  kube-system                 kube-apiserver-addons-699562              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-addons-699562     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-6ssr8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-scheduler-addons-699562              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 metrics-server-c59844bb4-pl8qk            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m5s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  local-path-storage          local-path-provisioner-8d985888d-2trqm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-th7qj           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m31s (x8 over 5m31s)  kubelet          Node addons-699562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s (x8 over 5m31s)  kubelet          Node addons-699562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s (x7 over 5m31s)  kubelet          Node addons-699562 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m25s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m25s                  kubelet          Node addons-699562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s                  kubelet          Node addons-699562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s                  kubelet          Node addons-699562 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m24s                  kubelet          Node addons-699562 status is now: NodeReady
	  Normal  RegisteredNode           5m12s                  node-controller  Node addons-699562 event: Registered Node addons-699562 in Controller
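
The Allocated resources block above is just the sum of the per-pod requests in the table: 100m + 100m + 250m + 200m + 100m + 100m = 850m of the node's 2 CPUs (~42%), and 70Mi + 100Mi + 200Mi + 128Mi = 498Mi of 3912780Ki memory (~13%), with limits 170Mi + 256Mi = 426Mi (~11%). To regenerate just that view later, something along these lines (the context name is taken from this run; the sed range is an assumption about the output layout):

  kubectl --context addons-699562 describe node addons-699562 \
    | sed -n '/Allocated resources/,/^Events:/p'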
	
	
	==> dmesg <==
	[  +4.797363] kauditd_printk_skb: 96 callbacks suppressed
	[  +5.070667] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.543858] kauditd_printk_skb: 115 callbacks suppressed
	[  +8.740436] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.913062] kauditd_printk_skb: 2 callbacks suppressed
	[Jun 3 12:26] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.135899] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.704931] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.044616] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.006333] kauditd_printk_skb: 85 callbacks suppressed
	[  +9.474385] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.925812] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.811764] kauditd_printk_skb: 24 callbacks suppressed
	[Jun 3 12:27] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.419790] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.060102] kauditd_printk_skb: 35 callbacks suppressed
	[Jun 3 12:28] kauditd_printk_skb: 82 callbacks suppressed
	[  +6.441502] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.709911] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.585647] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.428593] kauditd_printk_skb: 3 callbacks suppressed
	[ +15.030336] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.159254] kauditd_printk_skb: 33 callbacks suppressed
	[Jun 3 12:30] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.068631] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] <==
	{"level":"info","ts":"2024-06-03T12:26:40.969184Z","caller":"traceutil/trace.go:171","msg":"trace[875911949] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-pl8qk; range_end:; response_count:1; response_revision:1164; }","duration":"172.808766ms","start":"2024-06-03T12:26:40.796368Z","end":"2024-06-03T12:26:40.969176Z","steps":["trace[875911949] 'agreement among raft nodes before linearized reading'  (duration: 172.731177ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:40.969083Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.658256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-03T12:26:40.969383Z","caller":"traceutil/trace.go:171","msg":"trace[1638708360] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1164; }","duration":"168.010019ms","start":"2024-06-03T12:26:40.801363Z","end":"2024-06-03T12:26:40.969373Z","steps":["trace[1638708360] 'agreement among raft nodes before linearized reading'  (duration: 167.606225ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:40.969317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.010904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-03T12:26:40.969599Z","caller":"traceutil/trace.go:171","msg":"trace[494188914] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1164; }","duration":"363.307211ms","start":"2024-06-03T12:26:40.606284Z","end":"2024-06-03T12:26:40.969591Z","steps":["trace[494188914] 'agreement among raft nodes before linearized reading'  (duration: 362.985664ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:40.969676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:40.60627Z","time spent":"363.34474ms","remote":"127.0.0.1:33146","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11475,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-06-03T12:26:44.2914Z","caller":"traceutil/trace.go:171","msg":"trace[27234936] linearizableReadLoop","detail":"{readStateIndex:1214; appliedIndex:1213; }","duration":"329.42933ms","start":"2024-06-03T12:26:43.961874Z","end":"2024-06-03T12:26:44.291304Z","steps":["trace[27234936] 'read index received'  (duration: 329.000447ms)","trace[27234936] 'applied index is now lower than readState.Index'  (duration: 428.24µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T12:26:44.291596Z","caller":"traceutil/trace.go:171","msg":"trace[1309044197] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"436.33847ms","start":"2024-06-03T12:26:43.85524Z","end":"2024-06-03T12:26:44.291579Z","steps":["trace[1309044197] 'process raft request'  (duration: 435.849089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:44.29339Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:43.855226Z","time spent":"438.045751ms","remote":"127.0.0.1:33140","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1166 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-06-03T12:26:44.293718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.692606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-03T12:26:44.293762Z","caller":"traceutil/trace.go:171","msg":"trace[49774894] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1181; }","duration":"187.765631ms","start":"2024-06-03T12:26:44.10599Z","end":"2024-06-03T12:26:44.293756Z","steps":["trace[49774894] 'agreement among raft nodes before linearized reading'  (duration: 187.507223ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:26:44.293953Z","caller":"traceutil/trace.go:171","msg":"trace[1996245162] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"289.652518ms","start":"2024-06-03T12:26:44.004294Z","end":"2024-06-03T12:26:44.293946Z","steps":["trace[1996245162] 'process raft request'  (duration: 289.152911ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:44.291718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.823247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T12:26:44.294214Z","caller":"traceutil/trace.go:171","msg":"trace[1501958648] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1180; }","duration":"332.361109ms","start":"2024-06-03T12:26:43.961847Z","end":"2024-06-03T12:26:44.294208Z","steps":["trace[1501958648] 'agreement among raft nodes before linearized reading'  (duration: 329.829315ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:44.294236Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:43.961834Z","time spent":"332.394927ms","remote":"127.0.0.1:33232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":27,"request content":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" "}
	{"level":"info","ts":"2024-06-03T12:28:03.350249Z","caller":"traceutil/trace.go:171","msg":"trace[112141829] linearizableReadLoop","detail":"{readStateIndex:1550; appliedIndex:1549; }","duration":"344.411339ms","start":"2024-06-03T12:28:03.005804Z","end":"2024-06-03T12:28:03.350215Z","steps":["trace[112141829] 'read index received'  (duration: 344.171196ms)","trace[112141829] 'applied index is now lower than readState.Index'  (duration: 239.674µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T12:28:03.350549Z","caller":"traceutil/trace.go:171","msg":"trace[525061229] transaction","detail":"{read_only:false; response_revision:1493; number_of_response:1; }","duration":"407.785729ms","start":"2024-06-03T12:28:02.942753Z","end":"2024-06-03T12:28:03.350539Z","steps":["trace[525061229] 'process raft request'  (duration: 407.359787ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:28:03.350779Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:28:02.942735Z","time spent":"407.849445ms","remote":"127.0.0.1:33146","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4125,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/registry-proxy-n8265\" mod_revision:1490 > success:<request_put:<key:\"/registry/pods/kube-system/registry-proxy-n8265\" value_size:4070 >> failure:<request_range:<key:\"/registry/pods/kube-system/registry-proxy-n8265\" > >"}
	{"level":"warn","ts":"2024-06-03T12:28:03.350953Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.148304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-06-03T12:28:03.350975Z","caller":"traceutil/trace.go:171","msg":"trace[166667215] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1493; }","duration":"345.228477ms","start":"2024-06-03T12:28:03.00574Z","end":"2024-06-03T12:28:03.350969Z","steps":["trace[166667215] 'agreement among raft nodes before linearized reading'  (duration: 345.150589ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:28:03.35099Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:28:03.005727Z","time spent":"345.260644ms","remote":"127.0.0.1:33232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"warn","ts":"2024-06-03T12:28:03.351095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.791048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6051"}
	{"level":"info","ts":"2024-06-03T12:28:03.351136Z","caller":"traceutil/trace.go:171","msg":"trace[2033513505] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1493; }","duration":"223.850857ms","start":"2024-06-03T12:28:03.12728Z","end":"2024-06-03T12:28:03.35113Z","steps":["trace[2033513505] 'agreement among raft nodes before linearized reading'  (duration: 223.776833ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:28:19.240319Z","caller":"traceutil/trace.go:171","msg":"trace[1745785475] transaction","detail":"{read_only:false; response_revision:1593; number_of_response:1; }","duration":"377.328726ms","start":"2024-06-03T12:28:18.862975Z","end":"2024-06-03T12:28:19.240304Z","steps":["trace[1745785475] 'process raft request'  (duration: 377.221796ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:28:19.240445Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:28:18.862954Z","time spent":"377.437463ms","remote":"127.0.0.1:33140","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1589 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4] <==
	2024/06/03 12:26:44 GCP Auth Webhook started!
	2024/06/03 12:27:45 Ready to marshal response ...
	2024/06/03 12:27:45 Ready to write response ...
	2024/06/03 12:27:45 Ready to marshal response ...
	2024/06/03 12:27:45 Ready to write response ...
	2024/06/03 12:27:46 Ready to marshal response ...
	2024/06/03 12:27:46 Ready to write response ...
	2024/06/03 12:27:46 Ready to marshal response ...
	2024/06/03 12:27:46 Ready to write response ...
	2024/06/03 12:27:46 Ready to marshal response ...
	2024/06/03 12:27:46 Ready to write response ...
	2024/06/03 12:27:51 Ready to marshal response ...
	2024/06/03 12:27:51 Ready to write response ...
	2024/06/03 12:27:57 Ready to marshal response ...
	2024/06/03 12:27:57 Ready to write response ...
	2024/06/03 12:27:58 Ready to marshal response ...
	2024/06/03 12:27:58 Ready to write response ...
	2024/06/03 12:28:00 Ready to marshal response ...
	2024/06/03 12:28:00 Ready to write response ...
	2024/06/03 12:28:13 Ready to marshal response ...
	2024/06/03 12:28:13 Ready to write response ...
	2024/06/03 12:28:35 Ready to marshal response ...
	2024/06/03 12:28:35 Ready to write response ...
	2024/06/03 12:30:22 Ready to marshal response ...
	2024/06/03 12:30:22 Ready to write response ...
	
	
	==> kernel <==
	 12:30:34 up 6 min,  0 users,  load average: 0.34, 1.11, 0.63
	Linux addons-699562 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] <==
	E0603 12:27:09.544611       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.164.223:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.164.223:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.164.223:443: connect: connection refused
	I0603 12:27:09.607112       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0603 12:27:46.582529       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.91.232"}
	I0603 12:27:59.941066       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0603 12:28:00.119538       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.213.114"}
	I0603 12:28:04.555445       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0603 12:28:05.573603       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	E0603 12:28:05.580561       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	W0603 12:28:05.588138       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0603 12:28:26.681238       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0603 12:28:50.936173       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:50.936287       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 12:28:50.965267       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:50.965333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 12:28:50.975534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:50.975583       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 12:28:50.987547       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:50.987604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 12:28:51.026275       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:51.029483       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0603 12:28:51.975974       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0603 12:28:52.027110       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0603 12:28:52.037480       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0603 12:30:22.605713       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.62.100"}
	E0603 12:30:25.751190       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] <==
	W0603 12:29:17.014441       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:29:17.014553       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:29:27.425835       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:29:27.425985       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:29:30.412418       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:29:30.412518       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:29:30.812381       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:29:30.812479       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:30:04.128267       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:30:04.128694       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:30:04.205594       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:30:04.205775       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:30:07.142462       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:30:07.142560       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:30:07.589170       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:30:07.589205       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0603 12:30:22.438797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="40.583646ms"
	I0603 12:30:22.461785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="22.885543ms"
	I0603 12:30:22.461869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="36.813µs"
	I0603 12:30:22.468989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="24.42µs"
	I0603 12:30:25.651086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="3.624µs"
	I0603 12:30:25.654400       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0603 12:30:25.681077       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0603 12:30:26.051406       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="9.979993ms"
	I0603 12:30:26.052358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="79.584µs"
	
	
	==> kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] <==
	I0603 12:25:23.585524       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:25:23.608160       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
	I0603 12:25:23.755662       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:25:23.755712       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:25:23.755727       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:25:23.759852       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:25:23.760062       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:25:23.760076       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:25:23.764482       1 config.go:192] "Starting service config controller"
	I0603 12:25:23.764500       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:25:23.764539       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:25:23.764543       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:25:23.766909       1 config.go:319] "Starting node config controller"
	I0603 12:25:23.766944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:25:23.865029       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:25:23.865071       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:25:23.867400       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] <==
	W0603 12:25:05.882911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:25:05.883016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:25:05.883736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:05.884966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:05.884245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:25:05.884352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:25:05.884720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:05.884860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:25:06.694024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:25:06.694053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:25:06.778297       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:25:06.778385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:25:06.850846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:25:06.850894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:25:06.890880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:06.891766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:25:06.929739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:06.929827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:25:06.932321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:25:06.932367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:25:07.026054       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:25:07.026211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:25:07.199563       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:25:07.199751       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:25:09.962396       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.444619    1268 memory_manager.go:354] "RemoveStaleState removing state" podUID="db932b0d-726d-4b8d-b47c-dcbc1657a70d" containerName="node-driver-registrar"
	Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.444708    1268 memory_manager.go:354] "RemoveStaleState removing state" podUID="d102455a-acf1-4067-b512-3e7d24676733" containerName="csi-resizer"
	Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.444714    1268 memory_manager.go:354] "RemoveStaleState removing state" podUID="db932b0d-726d-4b8d-b47c-dcbc1657a70d" containerName="csi-provisioner"
	Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.444719    1268 memory_manager.go:354] "RemoveStaleState removing state" podUID="db932b0d-726d-4b8d-b47c-dcbc1657a70d" containerName="csi-snapshotter"
	Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.483130    1268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/084158b3-1687-4f4c-b741-cbab7ca11858-gcp-creds\") pod \"hello-world-app-86c47465fc-79c22\" (UID: \"084158b3-1687-4f4c-b741-cbab7ca11858\") " pod="default/hello-world-app-86c47465fc-79c22"
	Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.483186    1268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jrhz\" (UniqueName: \"kubernetes.io/projected/084158b3-1687-4f4c-b741-cbab7ca11858-kube-api-access-8jrhz\") pod \"hello-world-app-86c47465fc-79c22\" (UID: \"084158b3-1687-4f4c-b741-cbab7ca11858\") " pod="default/hello-world-app-86c47465fc-79c22"
	Jun 03 12:30:24 addons-699562 kubelet[1268]: I0603 12:30:24.014181    1268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="543fde334d4b530434d593b1fb43a32cd0aa6dd937131e82b4db8d5f79083144"
	Jun 03 12:30:24 addons-699562 kubelet[1268]: I0603 12:30:24.294970    1268 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlv79\" (UniqueName: \"kubernetes.io/projected/21a1c096-2479-4d10-864a-8b202b08a284-kube-api-access-wlv79\") pod \"21a1c096-2479-4d10-864a-8b202b08a284\" (UID: \"21a1c096-2479-4d10-864a-8b202b08a284\") "
	Jun 03 12:30:24 addons-699562 kubelet[1268]: I0603 12:30:24.308951    1268 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a1c096-2479-4d10-864a-8b202b08a284-kube-api-access-wlv79" (OuterVolumeSpecName: "kube-api-access-wlv79") pod "21a1c096-2479-4d10-864a-8b202b08a284" (UID: "21a1c096-2479-4d10-864a-8b202b08a284"). InnerVolumeSpecName "kube-api-access-wlv79". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 03 12:30:24 addons-699562 kubelet[1268]: I0603 12:30:24.396390    1268 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wlv79\" (UniqueName: \"kubernetes.io/projected/21a1c096-2479-4d10-864a-8b202b08a284-kube-api-access-wlv79\") on node \"addons-699562\" DevicePath \"\""
	Jun 03 12:30:26 addons-699562 kubelet[1268]: I0603 12:30:26.040372    1268 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-79c22" podStartSLOduration=2.1337134349999998 podStartE2EDuration="4.040342322s" podCreationTimestamp="2024-06-03 12:30:22 +0000 UTC" firstStartedPulling="2024-06-03 12:30:23.061899927 +0000 UTC m=+314.684732158" lastFinishedPulling="2024-06-03 12:30:24.968528811 +0000 UTC m=+316.591361045" observedRunningTime="2024-06-03 12:30:26.039989496 +0000 UTC m=+317.662821745" watchObservedRunningTime="2024-06-03 12:30:26.040342322 +0000 UTC m=+317.663174573"
	Jun 03 12:30:26 addons-699562 kubelet[1268]: I0603 12:30:26.549105    1268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a1c096-2479-4d10-864a-8b202b08a284" path="/var/lib/kubelet/pods/21a1c096-2479-4d10-864a-8b202b08a284/volumes"
	Jun 03 12:30:26 addons-699562 kubelet[1268]: I0603 12:30:26.549498    1268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70" path="/var/lib/kubelet/pods/40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70/volumes"
	Jun 03 12:30:26 addons-699562 kubelet[1268]: I0603 12:30:26.549950    1268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676ead8d-a891-4dac-8cc5-992c426fcdc9" path="/var/lib/kubelet/pods/676ead8d-a891-4dac-8cc5-992c426fcdc9/volumes"
	Jun 03 12:30:28 addons-699562 kubelet[1268]: I0603 12:30:28.933838    1268 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/748f7279-00fd-4d10-aa57-2f4c60258fe2-webhook-cert\") pod \"748f7279-00fd-4d10-aa57-2f4c60258fe2\" (UID: \"748f7279-00fd-4d10-aa57-2f4c60258fe2\") "
	Jun 03 12:30:28 addons-699562 kubelet[1268]: I0603 12:30:28.933879    1268 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh86b\" (UniqueName: \"kubernetes.io/projected/748f7279-00fd-4d10-aa57-2f4c60258fe2-kube-api-access-vh86b\") pod \"748f7279-00fd-4d10-aa57-2f4c60258fe2\" (UID: \"748f7279-00fd-4d10-aa57-2f4c60258fe2\") "
	Jun 03 12:30:28 addons-699562 kubelet[1268]: I0603 12:30:28.936167    1268 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748f7279-00fd-4d10-aa57-2f4c60258fe2-kube-api-access-vh86b" (OuterVolumeSpecName: "kube-api-access-vh86b") pod "748f7279-00fd-4d10-aa57-2f4c60258fe2" (UID: "748f7279-00fd-4d10-aa57-2f4c60258fe2"). InnerVolumeSpecName "kube-api-access-vh86b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 03 12:30:28 addons-699562 kubelet[1268]: I0603 12:30:28.937567    1268 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748f7279-00fd-4d10-aa57-2f4c60258fe2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "748f7279-00fd-4d10-aa57-2f4c60258fe2" (UID: "748f7279-00fd-4d10-aa57-2f4c60258fe2"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.034524    1268 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/748f7279-00fd-4d10-aa57-2f4c60258fe2-webhook-cert\") on node \"addons-699562\" DevicePath \"\""
	Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.034557    1268 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vh86b\" (UniqueName: \"kubernetes.io/projected/748f7279-00fd-4d10-aa57-2f4c60258fe2-kube-api-access-vh86b\") on node \"addons-699562\" DevicePath \"\""
	Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.045918    1268 scope.go:117] "RemoveContainer" containerID="77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c"
	Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.071057    1268 scope.go:117] "RemoveContainer" containerID="77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c"
	Jun 03 12:30:29 addons-699562 kubelet[1268]: E0603 12:30:29.071800    1268 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c\": container with ID starting with 77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c not found: ID does not exist" containerID="77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c"
	Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.071849    1268 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c"} err="failed to get container status \"77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c\": rpc error: code = NotFound desc = could not find container \"77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c\": container with ID starting with 77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c not found: ID does not exist"
	Jun 03 12:30:30 addons-699562 kubelet[1268]: I0603 12:30:30.556191    1268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748f7279-00fd-4d10-aa57-2f4c60258fe2" path="/var/lib/kubelet/pods/748f7279-00fd-4d10-aa57-2f4c60258fe2/volumes"
	
	
	==> storage-provisioner [17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06] <==
	I0603 12:25:34.289349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:25:34.302711       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:25:34.302770       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:25:34.321137       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:25:34.321265       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-699562_54717765-0edd-48a4-aaa9-cc3e6be606f3!
	I0603 12:25:34.323378       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b63e30e-da74-48d8-b9d7-4d6f0eeb01ad", APIVersion:"v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-699562_54717765-0edd-48a4-aaa9-cc3e6be606f3 became leader
	I0603 12:25:34.422282       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-699562_54717765-0edd-48a4-aaa9-cc3e6be606f3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-699562 -n addons-699562
helpers_test.go:261: (dbg) Run:  kubectl --context addons-699562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.09s)
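For manual triage of this ingress failure, a minimal sketch of kubectl checks against the same profile (assuming addons-699562 is still running and the ingress addon has not yet been disabled; the namespace and resource names are taken from the apiserver and controller-manager logs above, e.g. the default/nginx Service and the ingress-nginx-controller Deployment):
	# Hypothetical follow-up checks; not part of the recorded test run.
	kubectl --context addons-699562 -n ingress-nginx get pods,svc,deploy
	kubectl --context addons-699562 -n default get ingress,svc,pods
	kubectl --context addons-699562 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50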

x
+
TestAddons/parallel/MetricsServer (328.63s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.764697ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-pl8qk" [26f4580a-9514-47c0-aa22-11c454eaca32] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004635504s
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (81.720874ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 2m40.589497089s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (83.663602ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 2m43.020696368s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (66.075991ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 2m45.578980804s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (71.288634ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 2m50.525232391s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (72.823802ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 2m58.324624601s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (67.920527ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 3m8.741169124s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (63.283423ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 3m35.725206201s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (62.861535ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 4m3.892219826s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (70.166415ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 5m13.862602041s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (76.222706ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 6m37.270620292s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-699562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-699562 top pods -n kube-system: exit status 1 (70.810526ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmhdl, age: 8m1.299069096s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-699562 -n addons-699562
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-699562 logs -n 25: (1.407102346s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| delete  | -p download-only-640021                                                                     | download-only-640021 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| delete  | -p download-only-979896                                                                     | download-only-979896 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| delete  | -p download-only-640021                                                                     | download-only-640021 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-778765 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |                     |
	|         | binary-mirror-778765                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35769                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-778765                                                                     | binary-mirror-778765 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| addons  | enable dashboard -p                                                                         | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |                     |
	|         | addons-699562                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |                     |
	|         | addons-699562                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-699562 --wait=true                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	|         | -p addons-699562                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-699562 ssh cat                                                                       | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
	|         | /opt/local-path-provisioner/pvc-322948b5-f737-472a-a023-d147f813616b_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-699562 ip                                                                            | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | addons-699562                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | addons-699562                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-699562 ssh curl -s                                                                   | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | -p addons-699562                                                                            |                      |         |         |                     |                     |
	| addons  | addons-699562 addons                                                                        | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-699562 addons                                                                        | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-699562 ip                                                                            | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-699562 addons disable                                                                | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-699562 addons                                                                        | addons-699562        | jenkins | v1.33.1 | 03 Jun 24 12:33 UTC | 03 Jun 24 12:33 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:24:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:24:24.395017 1086826 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:24:24.395285 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:24:24.395295 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:24:24.395299 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:24:24.395564 1086826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:24:24.396217 1086826 out.go:298] Setting JSON to false
	I0603 12:24:24.397840 1086826 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11211,"bootTime":1717406253,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:24:24.397980 1086826 start.go:139] virtualization: kvm guest
	I0603 12:24:24.400088 1086826 out.go:177] * [addons-699562] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:24:24.401631 1086826 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:24:24.401592 1086826 notify.go:220] Checking for updates...
	I0603 12:24:24.403113 1086826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:24:24.404638 1086826 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:24:24.406028 1086826 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:24:24.407381 1086826 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:24:24.408703 1086826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:24:24.410378 1086826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:24:24.441869 1086826 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 12:24:24.443443 1086826 start.go:297] selected driver: kvm2
	I0603 12:24:24.443462 1086826 start.go:901] validating driver "kvm2" against <nil>
	I0603 12:24:24.443474 1086826 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:24:24.444153 1086826 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:24:24.444232 1086826 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:24:24.459337 1086826 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:24:24.459391 1086826 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 12:24:24.459645 1086826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:24:24.459722 1086826 cni.go:84] Creating CNI manager for ""
	I0603 12:24:24.459739 1086826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:24:24.459752 1086826 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 12:24:24.459835 1086826 start.go:340] cluster config:
	{Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:24:24.459949 1086826 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:24:24.461749 1086826 out.go:177] * Starting "addons-699562" primary control-plane node in "addons-699562" cluster
	I0603 12:24:24.462982 1086826 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:24:24.463022 1086826 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 12:24:24.463036 1086826 cache.go:56] Caching tarball of preloaded images
	I0603 12:24:24.463123 1086826 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:24:24.463134 1086826 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:24:24.463498 1086826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/config.json ...
	I0603 12:24:24.463531 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/config.json: {Name:mka3fc11f119399ce4f1970b76b906c714896655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:24.463710 1086826 start.go:360] acquireMachinesLock for addons-699562: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:24:24.463793 1086826 start.go:364] duration metric: took 57.075µs to acquireMachinesLock for "addons-699562"
	I0603 12:24:24.463819 1086826 start.go:93] Provisioning new machine with config: &{Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:24:24.463894 1086826 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 12:24:24.465633 1086826 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0603 12:24:24.465788 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:24:24.465842 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:24:24.480246 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33161
	I0603 12:24:24.480702 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:24:24.481265 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:24:24.481286 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:24:24.481607 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:24:24.481786 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
	I0603 12:24:24.481950 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:24.482080 1086826 start.go:159] libmachine.API.Create for "addons-699562" (driver="kvm2")
	I0603 12:24:24.482118 1086826 client.go:168] LocalClient.Create starting
	I0603 12:24:24.482153 1086826 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 12:24:24.830722 1086826 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 12:24:25.061334 1086826 main.go:141] libmachine: Running pre-create checks...
	I0603 12:24:25.061363 1086826 main.go:141] libmachine: (addons-699562) Calling .PreCreateCheck
	I0603 12:24:25.061875 1086826 main.go:141] libmachine: (addons-699562) Calling .GetConfigRaw
	I0603 12:24:25.062377 1086826 main.go:141] libmachine: Creating machine...
	I0603 12:24:25.062395 1086826 main.go:141] libmachine: (addons-699562) Calling .Create
	I0603 12:24:25.062542 1086826 main.go:141] libmachine: (addons-699562) Creating KVM machine...
	I0603 12:24:25.063695 1086826 main.go:141] libmachine: (addons-699562) DBG | found existing default KVM network
	I0603 12:24:25.064418 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.064281 1086848 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015340}
	I0603 12:24:25.064476 1086826 main.go:141] libmachine: (addons-699562) DBG | created network xml: 
	I0603 12:24:25.064498 1086826 main.go:141] libmachine: (addons-699562) DBG | <network>
	I0603 12:24:25.064510 1086826 main.go:141] libmachine: (addons-699562) DBG |   <name>mk-addons-699562</name>
	I0603 12:24:25.064522 1086826 main.go:141] libmachine: (addons-699562) DBG |   <dns enable='no'/>
	I0603 12:24:25.064532 1086826 main.go:141] libmachine: (addons-699562) DBG |   
	I0603 12:24:25.064546 1086826 main.go:141] libmachine: (addons-699562) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 12:24:25.064557 1086826 main.go:141] libmachine: (addons-699562) DBG |     <dhcp>
	I0603 12:24:25.064570 1086826 main.go:141] libmachine: (addons-699562) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 12:24:25.064619 1086826 main.go:141] libmachine: (addons-699562) DBG |     </dhcp>
	I0603 12:24:25.064645 1086826 main.go:141] libmachine: (addons-699562) DBG |   </ip>
	I0603 12:24:25.064652 1086826 main.go:141] libmachine: (addons-699562) DBG |   
	I0603 12:24:25.064657 1086826 main.go:141] libmachine: (addons-699562) DBG | </network>
	I0603 12:24:25.064665 1086826 main.go:141] libmachine: (addons-699562) DBG | 
	I0603 12:24:25.069891 1086826 main.go:141] libmachine: (addons-699562) DBG | trying to create private KVM network mk-addons-699562 192.168.39.0/24...
	I0603 12:24:25.134687 1086826 main.go:141] libmachine: (addons-699562) DBG | private KVM network mk-addons-699562 192.168.39.0/24 created
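	For reference, the libvirt network the log reports as created can be inspected directly on the build host with virsh. This is a sketch only; the network name mk-addons-699562 is taken from the log lines above:
	
	    # list all libvirt networks; mk-addons-699562 should appear as active
	    virsh net-list --all
	    # dump the XML libvirt actually stored for the minikube network
	    virsh net-dumpxml mk-addons-699562
	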
	I0603 12:24:25.134727 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.134642 1086848 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:24:25.134742 1086826 main.go:141] libmachine: (addons-699562) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562 ...
	I0603 12:24:25.134761 1086826 main.go:141] libmachine: (addons-699562) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:24:25.134778 1086826 main.go:141] libmachine: (addons-699562) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:24:25.382531 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.382373 1086848 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa...
	I0603 12:24:25.538612 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.538462 1086848 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/addons-699562.rawdisk...
	I0603 12:24:25.538652 1086826 main.go:141] libmachine: (addons-699562) DBG | Writing magic tar header
	I0603 12:24:25.538667 1086826 main.go:141] libmachine: (addons-699562) DBG | Writing SSH key tar header
	I0603 12:24:25.538682 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.538619 1086848 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562 ...
	I0603 12:24:25.538813 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562 (perms=drwx------)
	I0603 12:24:25.538861 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562
	I0603 12:24:25.538880 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 12:24:25.538893 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 12:24:25.538899 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 12:24:25.538906 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 12:24:25.538917 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 12:24:25.538929 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 12:24:25.538943 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:24:25.538956 1086826 main.go:141] libmachine: (addons-699562) Creating domain...
	I0603 12:24:25.538971 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 12:24:25.538983 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 12:24:25.538990 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins
	I0603 12:24:25.538995 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home
	I0603 12:24:25.539005 1086826 main.go:141] libmachine: (addons-699562) DBG | Skipping /home - not owner
	I0603 12:24:25.539894 1086826 main.go:141] libmachine: (addons-699562) define libvirt domain using xml: 
	I0603 12:24:25.539914 1086826 main.go:141] libmachine: (addons-699562) <domain type='kvm'>
	I0603 12:24:25.539924 1086826 main.go:141] libmachine: (addons-699562)   <name>addons-699562</name>
	I0603 12:24:25.539931 1086826 main.go:141] libmachine: (addons-699562)   <memory unit='MiB'>4000</memory>
	I0603 12:24:25.539939 1086826 main.go:141] libmachine: (addons-699562)   <vcpu>2</vcpu>
	I0603 12:24:25.539949 1086826 main.go:141] libmachine: (addons-699562)   <features>
	I0603 12:24:25.539955 1086826 main.go:141] libmachine: (addons-699562)     <acpi/>
	I0603 12:24:25.539961 1086826 main.go:141] libmachine: (addons-699562)     <apic/>
	I0603 12:24:25.539966 1086826 main.go:141] libmachine: (addons-699562)     <pae/>
	I0603 12:24:25.539970 1086826 main.go:141] libmachine: (addons-699562)     
	I0603 12:24:25.539977 1086826 main.go:141] libmachine: (addons-699562)   </features>
	I0603 12:24:25.539982 1086826 main.go:141] libmachine: (addons-699562)   <cpu mode='host-passthrough'>
	I0603 12:24:25.539992 1086826 main.go:141] libmachine: (addons-699562)   
	I0603 12:24:25.540021 1086826 main.go:141] libmachine: (addons-699562)   </cpu>
	I0603 12:24:25.540040 1086826 main.go:141] libmachine: (addons-699562)   <os>
	I0603 12:24:25.540047 1086826 main.go:141] libmachine: (addons-699562)     <type>hvm</type>
	I0603 12:24:25.540051 1086826 main.go:141] libmachine: (addons-699562)     <boot dev='cdrom'/>
	I0603 12:24:25.540056 1086826 main.go:141] libmachine: (addons-699562)     <boot dev='hd'/>
	I0603 12:24:25.540063 1086826 main.go:141] libmachine: (addons-699562)     <bootmenu enable='no'/>
	I0603 12:24:25.540068 1086826 main.go:141] libmachine: (addons-699562)   </os>
	I0603 12:24:25.540072 1086826 main.go:141] libmachine: (addons-699562)   <devices>
	I0603 12:24:25.540080 1086826 main.go:141] libmachine: (addons-699562)     <disk type='file' device='cdrom'>
	I0603 12:24:25.540089 1086826 main.go:141] libmachine: (addons-699562)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/boot2docker.iso'/>
	I0603 12:24:25.540099 1086826 main.go:141] libmachine: (addons-699562)       <target dev='hdc' bus='scsi'/>
	I0603 12:24:25.540109 1086826 main.go:141] libmachine: (addons-699562)       <readonly/>
	I0603 12:24:25.540135 1086826 main.go:141] libmachine: (addons-699562)     </disk>
	I0603 12:24:25.540160 1086826 main.go:141] libmachine: (addons-699562)     <disk type='file' device='disk'>
	I0603 12:24:25.540175 1086826 main.go:141] libmachine: (addons-699562)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 12:24:25.540192 1086826 main.go:141] libmachine: (addons-699562)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/addons-699562.rawdisk'/>
	I0603 12:24:25.540205 1086826 main.go:141] libmachine: (addons-699562)       <target dev='hda' bus='virtio'/>
	I0603 12:24:25.540215 1086826 main.go:141] libmachine: (addons-699562)     </disk>
	I0603 12:24:25.540222 1086826 main.go:141] libmachine: (addons-699562)     <interface type='network'>
	I0603 12:24:25.540238 1086826 main.go:141] libmachine: (addons-699562)       <source network='mk-addons-699562'/>
	I0603 12:24:25.540251 1086826 main.go:141] libmachine: (addons-699562)       <model type='virtio'/>
	I0603 12:24:25.540261 1086826 main.go:141] libmachine: (addons-699562)     </interface>
	I0603 12:24:25.540273 1086826 main.go:141] libmachine: (addons-699562)     <interface type='network'>
	I0603 12:24:25.540288 1086826 main.go:141] libmachine: (addons-699562)       <source network='default'/>
	I0603 12:24:25.540297 1086826 main.go:141] libmachine: (addons-699562)       <model type='virtio'/>
	I0603 12:24:25.540306 1086826 main.go:141] libmachine: (addons-699562)     </interface>
	I0603 12:24:25.540313 1086826 main.go:141] libmachine: (addons-699562)     <serial type='pty'>
	I0603 12:24:25.540323 1086826 main.go:141] libmachine: (addons-699562)       <target port='0'/>
	I0603 12:24:25.540336 1086826 main.go:141] libmachine: (addons-699562)     </serial>
	I0603 12:24:25.540346 1086826 main.go:141] libmachine: (addons-699562)     <console type='pty'>
	I0603 12:24:25.540358 1086826 main.go:141] libmachine: (addons-699562)       <target type='serial' port='0'/>
	I0603 12:24:25.540372 1086826 main.go:141] libmachine: (addons-699562)     </console>
	I0603 12:24:25.540383 1086826 main.go:141] libmachine: (addons-699562)     <rng model='virtio'>
	I0603 12:24:25.540394 1086826 main.go:141] libmachine: (addons-699562)       <backend model='random'>/dev/random</backend>
	I0603 12:24:25.540400 1086826 main.go:141] libmachine: (addons-699562)     </rng>
	I0603 12:24:25.540407 1086826 main.go:141] libmachine: (addons-699562)     
	I0603 12:24:25.540420 1086826 main.go:141] libmachine: (addons-699562)     
	I0603 12:24:25.540427 1086826 main.go:141] libmachine: (addons-699562)   </devices>
	I0603 12:24:25.540450 1086826 main.go:141] libmachine: (addons-699562) </domain>
	I0603 12:24:25.540473 1086826 main.go:141] libmachine: (addons-699562) 
	I0603 12:24:25.546035 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:52:26:8d in network default
	I0603 12:24:25.546507 1086826 main.go:141] libmachine: (addons-699562) Ensuring networks are active...
	I0603 12:24:25.546533 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:25.547150 1086826 main.go:141] libmachine: (addons-699562) Ensuring network default is active
	I0603 12:24:25.547454 1086826 main.go:141] libmachine: (addons-699562) Ensuring network mk-addons-699562 is active
	I0603 12:24:25.547879 1086826 main.go:141] libmachine: (addons-699562) Getting domain xml...
	I0603 12:24:25.548531 1086826 main.go:141] libmachine: (addons-699562) Creating domain...
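	Once the domain has been defined and started, its state and interfaces can be checked with virsh as well. A sketch, assuming the domain name addons-699562 shown in the XML above:
	
	    # basic state, memory and vCPU count of the freshly created VM
	    virsh dominfo addons-699562
	    # the two virtio NICs defined in the XML above, with their MAC addresses
	    virsh domiflist addons-699562
	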
	I0603 12:24:26.908511 1086826 main.go:141] libmachine: (addons-699562) Waiting to get IP...
	I0603 12:24:26.909158 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:26.909617 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:26.909662 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:26.909614 1086848 retry.go:31] will retry after 278.583828ms: waiting for machine to come up
	I0603 12:24:27.190168 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:27.190625 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:27.190656 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:27.190586 1086848 retry.go:31] will retry after 372.5456ms: waiting for machine to come up
	I0603 12:24:27.565372 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:27.565870 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:27.565914 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:27.565824 1086848 retry.go:31] will retry after 296.896127ms: waiting for machine to come up
	I0603 12:24:27.864373 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:27.864848 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:27.864874 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:27.864792 1086848 retry.go:31] will retry after 404.252126ms: waiting for machine to come up
	I0603 12:24:28.270290 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:28.270670 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:28.270696 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:28.270636 1086848 retry.go:31] will retry after 599.58078ms: waiting for machine to come up
	I0603 12:24:28.871331 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:28.871741 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:28.871765 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:28.871690 1086848 retry.go:31] will retry after 952.068344ms: waiting for machine to come up
	I0603 12:24:29.825179 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:29.825523 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:29.825588 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:29.825498 1086848 retry.go:31] will retry after 1.104687103s: waiting for machine to come up
	I0603 12:24:30.931756 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:30.932080 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:30.932117 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:30.932013 1086848 retry.go:31] will retry after 1.141640091s: waiting for machine to come up
	I0603 12:24:32.075239 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:32.075624 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:32.075650 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:32.075551 1086848 retry.go:31] will retry after 1.323363823s: waiting for machine to come up
	I0603 12:24:33.401067 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:33.401447 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:33.401478 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:33.401379 1086848 retry.go:31] will retry after 1.79959901s: waiting for machine to come up
	I0603 12:24:35.202394 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:35.202849 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:35.202881 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:35.202784 1086848 retry.go:31] will retry after 2.402984849s: waiting for machine to come up
	I0603 12:24:37.608253 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:37.608533 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:37.608549 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:37.608522 1086848 retry.go:31] will retry after 3.335405184s: waiting for machine to come up
	I0603 12:24:40.945518 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:40.945934 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:40.945954 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:40.945909 1086848 retry.go:31] will retry after 3.713074283s: waiting for machine to come up
	I0603 12:24:44.660565 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:44.661082 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
	I0603 12:24:44.661109 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:44.661034 1086848 retry.go:31] will retry after 5.622787495s: waiting for machine to come up
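	The retry loop above is waiting for the VM to pick up a DHCP lease on the mk-addons-699562 network. The same lease can be watched from the host; a sketch using a standard virsh command:
	
	    # shows the lease (IP, MAC, hostname) once the guest has requested one
	    virsh net-dhcp-leases mk-addons-699562
	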
	I0603 12:24:50.285257 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.285752 1086826 main.go:141] libmachine: (addons-699562) Found IP for machine: 192.168.39.241
	I0603 12:24:50.285779 1086826 main.go:141] libmachine: (addons-699562) Reserving static IP address...
	I0603 12:24:50.285794 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has current primary IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.286188 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find host DHCP lease matching {name: "addons-699562", mac: "52:54:00:d2:ff:f6", ip: "192.168.39.241"} in network mk-addons-699562
	I0603 12:24:50.392337 1086826 main.go:141] libmachine: (addons-699562) DBG | Getting to WaitForSSH function...
	I0603 12:24:50.392376 1086826 main.go:141] libmachine: (addons-699562) Reserved static IP address: 192.168.39.241
	I0603 12:24:50.392391 1086826 main.go:141] libmachine: (addons-699562) Waiting for SSH to be available...
	I0603 12:24:50.394776 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.395257 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.395285 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.395509 1086826 main.go:141] libmachine: (addons-699562) DBG | Using SSH client type: external
	I0603 12:24:50.395533 1086826 main.go:141] libmachine: (addons-699562) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa (-rw-------)
	I0603 12:24:50.395566 1086826 main.go:141] libmachine: (addons-699562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:24:50.395588 1086826 main.go:141] libmachine: (addons-699562) DBG | About to run SSH command:
	I0603 12:24:50.395604 1086826 main.go:141] libmachine: (addons-699562) DBG | exit 0
	I0603 12:24:50.517849 1086826 main.go:141] libmachine: (addons-699562) DBG | SSH cmd err, output: <nil>: 
	I0603 12:24:50.518082 1086826 main.go:141] libmachine: (addons-699562) KVM machine creation complete!
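	The SSH probe above uses an external ssh client with the generated machine key. An equivalent manual check, using only the key path, user and IP printed in the log (host key checking is relaxed because the VM is disposable):
	
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa \
	        docker@192.168.39.241 'exit 0'
	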
	I0603 12:24:50.518448 1086826 main.go:141] libmachine: (addons-699562) Calling .GetConfigRaw
	I0603 12:24:50.550682 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:50.551029 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:50.551273 1086826 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 12:24:50.551288 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:24:50.552696 1086826 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 12:24:50.552714 1086826 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 12:24:50.552722 1086826 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 12:24:50.552730 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:50.554915 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.555224 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.555250 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.555415 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:50.555599 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.555761 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.555931 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:50.556114 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:50.556316 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:50.556330 1086826 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 12:24:50.656762 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:24:50.656797 1086826 main.go:141] libmachine: Detecting the provisioner...
	I0603 12:24:50.656809 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:50.659643 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.660034 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.660063 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.660274 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:50.660454 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.660743 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.660933 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:50.661128 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:50.661342 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:50.661355 1086826 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 12:24:50.763353 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 12:24:50.763438 1086826 main.go:141] libmachine: found compatible host: buildroot
	I0603 12:24:50.763447 1086826 main.go:141] libmachine: Provisioning with buildroot...
	I0603 12:24:50.763457 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
	I0603 12:24:50.763772 1086826 buildroot.go:166] provisioning hostname "addons-699562"
	I0603 12:24:50.763801 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
	I0603 12:24:50.764045 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:50.766806 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.767124 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.767155 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.767267 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:50.767455 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.767658 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.767810 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:50.768006 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:50.768174 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:50.768185 1086826 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-699562 && echo "addons-699562" | sudo tee /etc/hostname
	I0603 12:24:50.884135 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-699562
	
	I0603 12:24:50.884174 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:50.886988 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.887370 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.887399 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.887522 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:50.887747 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.887929 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:50.888065 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:50.888223 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:50.888400 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:50.888415 1086826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-699562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-699562/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-699562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:24:50.994596 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:24:50.994627 1086826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:24:50.994678 1086826 buildroot.go:174] setting up certificates
	I0603 12:24:50.994690 1086826 provision.go:84] configureAuth start
	I0603 12:24:50.994705 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
	I0603 12:24:50.994999 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
	I0603 12:24:50.997877 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.998222 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:50.998250 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:50.998373 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.000223 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.000545 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.000580 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.000681 1086826 provision.go:143] copyHostCerts
	I0603 12:24:51.000769 1086826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:24:51.000904 1086826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:24:51.000977 1086826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:24:51.001042 1086826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.addons-699562 san=[127.0.0.1 192.168.39.241 addons-699562 localhost minikube]
	I0603 12:24:51.342081 1086826 provision.go:177] copyRemoteCerts
	I0603 12:24:51.342147 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:24:51.342179 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.344885 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.345246 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.345285 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.345439 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.345638 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.345834 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.345944 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:24:51.424011 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:24:51.448763 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 12:24:51.472279 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:24:51.495752 1086826 provision.go:87] duration metric: took 501.043641ms to configureAuth
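	At this point configureAuth has copied the CA, server certificate and server key into /etc/docker on the guest. A quick way to confirm that by hand, reusing the minikube binary and profile name from the command table (a sketch, not part of the test run):
	
	    out/minikube-linux-amd64 -p addons-699562 ssh 'sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'
	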
	I0603 12:24:51.495788 1086826 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:24:51.495998 1086826 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:24:51.496096 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.498510 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.498896 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.498926 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.499093 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.499296 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.499463 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.499633 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.499826 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:51.500031 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:51.500047 1086826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:24:51.754663 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
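	The command above writes the CRI-O drop-in and restarts the service, and the echoed output shows the insecure-registry flag that was applied. To double-check from inside the VM (a sketch over the same SSH session shown earlier):
	
	    # confirm the drop-in contents and that CRI-O restarted cleanly
	    cat /etc/sysconfig/crio.minikube
	    sudo systemctl is-active crio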
	
	I0603 12:24:51.754694 1086826 main.go:141] libmachine: Checking connection to Docker...
	I0603 12:24:51.754702 1086826 main.go:141] libmachine: (addons-699562) Calling .GetURL
	I0603 12:24:51.756019 1086826 main.go:141] libmachine: (addons-699562) DBG | Using libvirt version 6000000
	I0603 12:24:51.758172 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.758540 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.758570 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.758742 1086826 main.go:141] libmachine: Docker is up and running!
	I0603 12:24:51.758758 1086826 main.go:141] libmachine: Reticulating splines...
	I0603 12:24:51.758766 1086826 client.go:171] duration metric: took 27.276637808s to LocalClient.Create
	I0603 12:24:51.758788 1086826 start.go:167] duration metric: took 27.276710156s to libmachine.API.Create "addons-699562"
	I0603 12:24:51.758798 1086826 start.go:293] postStartSetup for "addons-699562" (driver="kvm2")
	I0603 12:24:51.758807 1086826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:24:51.758824 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.759114 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:24:51.759147 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.761475 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.761749 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.761772 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.761911 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.762082 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.762241 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.762381 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:24:51.844343 1086826 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:24:51.848752 1086826 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:24:51.848851 1086826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:24:51.848922 1086826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:24:51.848944 1086826 start.go:296] duration metric: took 90.142044ms for postStartSetup
	I0603 12:24:51.848980 1086826 main.go:141] libmachine: (addons-699562) Calling .GetConfigRaw
	I0603 12:24:51.849638 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
	I0603 12:24:51.852138 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.852481 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.852518 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.852718 1086826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/config.json ...
	I0603 12:24:51.852881 1086826 start.go:128] duration metric: took 27.388970368s to createHost
	I0603 12:24:51.852902 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.854845 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.855095 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.855124 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.855228 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.855393 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.855524 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.855619 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.855730 1086826 main.go:141] libmachine: Using SSH client type: native
	I0603 12:24:51.855934 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0603 12:24:51.855948 1086826 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:24:51.954192 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717417491.933231742
	
	I0603 12:24:51.954226 1086826 fix.go:216] guest clock: 1717417491.933231742
	I0603 12:24:51.954242 1086826 fix.go:229] Guest: 2024-06-03 12:24:51.933231742 +0000 UTC Remote: 2024-06-03 12:24:51.852891604 +0000 UTC m=+27.492675075 (delta=80.340138ms)
	I0603 12:24:51.954295 1086826 fix.go:200] guest clock delta is within tolerance: 80.340138ms
	I0603 12:24:51.954303 1086826 start.go:83] releasing machines lock for "addons-699562", held for 27.490498582s
	I0603 12:24:51.954328 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.954622 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
	I0603 12:24:51.957183 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.957520 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.957549 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.957708 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.958214 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.958399 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:24:51.958512 1086826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:24:51.958570 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.958629 1086826 ssh_runner.go:195] Run: cat /version.json
	I0603 12:24:51.958653 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:24:51.961231 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.961545 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.961572 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.961595 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.961831 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.961927 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:51.961956 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:51.961990 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.962080 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:24:51.962171 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.962217 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:24:51.962347 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:24:51.962424 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:24:51.962504 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:24:52.034613 1086826 ssh_runner.go:195] Run: systemctl --version
	I0603 12:24:52.061172 1086826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:24:52.225052 1086826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:24:52.231581 1086826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:24:52.231658 1086826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:24:52.250882 1086826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:24:52.250910 1086826 start.go:494] detecting cgroup driver to use...
	I0603 12:24:52.250982 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:24:52.271994 1086826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:24:52.288519 1086826 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:24:52.288600 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:24:52.304066 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:24:52.318357 1086826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:24:52.444110 1086826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:24:52.607807 1086826 docker.go:233] disabling docker service ...
	I0603 12:24:52.607888 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:24:52.622983 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:24:52.635763 1086826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:24:52.756045 1086826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:24:52.870974 1086826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:24:52.885958 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:24:52.909939 1086826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:24:52.910019 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.920976 1086826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:24:52.921043 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.932075 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.943370 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.954401 1086826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:24:52.965823 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.976875 1086826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:24:52.994792 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
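The commands above point crictl at the CRI-O socket and patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image registry.k8s.io/pause:3.9, cgroupfs as the cgroup manager, conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A minimal sketch for checking the end state on the guest (only the keys touched above are shown; the rest of the drop-in is not in the log):

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",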
	I0603 12:24:53.006108 1086826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:24:53.016055 1086826 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:24:53.016127 1086826 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:24:53.030064 1086826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:24:53.039928 1086826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:24:53.156311 1086826 ssh_runner.go:195] Run: sudo systemctl restart crio
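The sysctl probe at 12:24:53.006 failed with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so the module is loaded explicitly, IPv4 forwarding is switched on, and CRI-O is restarted. A sketch of the same checks run by hand:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # key resolves once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above
    systemctl is-active crio                    # active once the restart completes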
	I0603 12:24:53.297085 1086826 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:24:53.297199 1086826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:24:53.302466 1086826 start.go:562] Will wait 60s for crictl version
	I0603 12:24:53.302559 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:24:53.306379 1086826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:24:53.351831 1086826 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:24:53.351927 1086826 ssh_runner.go:195] Run: crio --version
	I0603 12:24:53.380029 1086826 ssh_runner.go:195] Run: crio --version
	I0603 12:24:53.410556 1086826 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:24:53.411804 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
	I0603 12:24:53.414687 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:53.415038 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:24:53.415065 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:24:53.415276 1086826 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:24:53.419753 1086826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:24:53.432681 1086826 kubeadm.go:877] updating cluster {Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:24:53.432799 1086826 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:24:53.432842 1086826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:24:53.466485 1086826 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:24:53.466571 1086826 ssh_runner.go:195] Run: which lz4
	I0603 12:24:53.470862 1086826 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:24:53.475112 1086826 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:24:53.475150 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:24:54.805803 1086826 crio.go:462] duration metric: took 1.334972428s to copy over tarball
	I0603 12:24:54.805891 1086826 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:24:57.079171 1086826 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273232417s)
	I0603 12:24:57.079222 1086826 crio.go:469] duration metric: took 2.273384926s to extract the tarball
	I0603 12:24:57.079239 1086826 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:24:57.118848 1086826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:24:57.171954 1086826 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:24:57.171984 1086826 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:24:57.171995 1086826 kubeadm.go:928] updating node { 192.168.39.241 8443 v1.30.1 crio true true} ...
	I0603 12:24:57.172114 1086826 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-699562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
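The rendered unit above is installed a few lines below as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes) next to /lib/systemd/system/kubelet.service (352 bytes); the empty ExecStart= line clears the ExecStart inherited from the base unit before the minikube-specific command line is set. A sketch for confirming what the kubelet actually runs with:

    systemctl cat kubelet                            # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager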
	I0603 12:24:57.172180 1086826 ssh_runner.go:195] Run: crio config
	I0603 12:24:57.226884 1086826 cni.go:84] Creating CNI manager for ""
	I0603 12:24:57.226908 1086826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:24:57.226918 1086826 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:24:57.226941 1086826 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-699562 NodeName:addons-699562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:24:57.227076 1086826 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-699562"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
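This generated config is staged a few lines below as /var/tmp/minikube/kubeadm.yaml.new (2157 bytes) and later copied to /var/tmp/minikube/kubeadm.yaml before init runs. A hedged sketch for sanity-checking it by hand, assuming the bundled v1.30.1 kubeadm binary (recent kubeadm releases ship a config validate subcommand):

    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml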
	I0603 12:24:57.227137 1086826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:24:57.239219 1086826 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:24:57.239289 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 12:24:57.250643 1086826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 12:24:57.269452 1086826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:24:57.289045 1086826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0603 12:24:57.308060 1086826 ssh_runner.go:195] Run: grep 192.168.39.241	control-plane.minikube.internal$ /etc/hosts
	I0603 12:24:57.312353 1086826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:24:57.326672 1086826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:24:57.469263 1086826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:24:57.487461 1086826 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562 for IP: 192.168.39.241
	I0603 12:24:57.487489 1086826 certs.go:194] generating shared ca certs ...
	I0603 12:24:57.487508 1086826 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.487662 1086826 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:24:57.796136 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt ...
	I0603 12:24:57.796173 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt: {Name:mkf6899bfed4ad6512f084e6101d8170b87aa8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.796347 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key ...
	I0603 12:24:57.796359 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key: {Name:mkb9d4ed66614d50db2e65010103ad18fc38392f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.796434 1086826 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:24:57.988064 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt ...
	I0603 12:24:57.988093 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt: {Name:mkab0d8277f7066917c19f74ecac4b98f17efe97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.988258 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key ...
	I0603 12:24:57.988269 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key: {Name:mkfdedf65267e5b22a2568e9daa9efca1f06a694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:57.988340 1086826 certs.go:256] generating profile certs ...
	I0603 12:24:57.988401 1086826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.key
	I0603 12:24:57.988418 1086826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt with IP's: []
	I0603 12:24:58.169717 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt ...
	I0603 12:24:58.169748 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: {Name:mk0332016de9f15436fb308f06459566b4755678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.169912 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.key ...
	I0603 12:24:58.169924 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.key: {Name:mkbd821f9271c2b7a33d746cd213fabc96fbeca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.169995 1086826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0
	I0603 12:24:58.170014 1086826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.241]
	I0603 12:24:58.353654 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0 ...
	I0603 12:24:58.353688 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0: {Name:mk2efdc33db4a931854f6a87476a9e7c076c4560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.353848 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0 ...
	I0603 12:24:58.353862 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0: {Name:mkbfd6a19ac77e29694cc3e059a9a211b4a91c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.353944 1086826 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt
	I0603 12:24:58.354032 1086826 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key
	I0603 12:24:58.354086 1086826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key
	I0603 12:24:58.354106 1086826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt with IP's: []
	I0603 12:24:58.527806 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt ...
	I0603 12:24:58.527842 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt: {Name:mkaadea5326f9442ed664027a21a81b1f09a2cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.528017 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key ...
	I0603 12:24:58.528030 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key: {Name:mk8d4a5cdfed9257e413dc25422f47f0d4704dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:58.528204 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:24:58.528243 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:24:58.528269 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:24:58.528291 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
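The profile's apiserver cert generated above is signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.241 listed at 12:24:58.170014. A sketch for inspecting the SANs on the guest once the scp steps below have copied it into place:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
        | grep -A1 'Subject Alternative Name'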
	I0603 12:24:58.528908 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:24:58.555501 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:24:58.579639 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:24:58.603222 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:24:58.626441 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0603 12:24:58.650027 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 12:24:58.673722 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:24:58.696976 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:24:58.720137 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:24:58.743306 1086826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:24:58.760135 1086826 ssh_runner.go:195] Run: openssl version
	I0603 12:24:58.766118 1086826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:24:58.777446 1086826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:24:58.781938 1086826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:24:58.781984 1086826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:24:58.787982 1086826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
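The two steps above install minikubeCA.pem under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name, b5213941.0 here; the -hash flag used at 12:24:58.781984 prints exactly that 8-hex-digit value, which is how the link name is derived. A sketch (the expected hash is taken from the symlink name in the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    ls -l /etc/ssl/certs/b5213941.0
    # ... b5213941.0 -> /etc/ssl/certs/minikubeCA.pem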
	I0603 12:24:58.799110 1086826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:24:58.803751 1086826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 12:24:58.803822 1086826 kubeadm.go:391] StartCluster: {Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:24:58.803923 1086826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:24:58.803994 1086826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:24:58.845118 1086826 cri.go:89] found id: ""
	I0603 12:24:58.845210 1086826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 12:24:58.855452 1086826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:24:58.866068 1086826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:24:58.876236 1086826 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:24:58.876269 1086826 kubeadm.go:156] found existing configuration files:
	
	I0603 12:24:58.876322 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:24:58.885660 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:24:58.885722 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:24:58.895484 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:24:58.904797 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:24:58.904849 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:24:58.914443 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:24:58.923913 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:24:58.923979 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:24:58.936380 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:24:58.946194 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:24:58.946246 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:24:58.968281 1086826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:24:59.030885 1086826 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:24:59.030942 1086826 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:24:59.154648 1086826 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:24:59.154824 1086826 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:24:59.154989 1086826 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:24:59.386626 1086826 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:24:59.388555 1086826 out.go:204]   - Generating certificates and keys ...
	I0603 12:24:59.388648 1086826 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:24:59.388729 1086826 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:24:59.509601 1086826 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 12:24:59.635592 1086826 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 12:24:59.705913 1086826 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 12:24:59.780001 1086826 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 12:24:59.863390 1086826 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 12:24:59.863715 1086826 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-699562 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0603 12:24:59.965490 1086826 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 12:24:59.965718 1086826 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-699562 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
	I0603 12:25:00.170107 1086826 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 12:25:00.327566 1086826 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 12:25:00.439543 1086826 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 12:25:00.439669 1086826 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:25:00.535598 1086826 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:25:00.754190 1086826 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:25:00.905712 1086826 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:25:01.465978 1086826 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:25:01.632676 1086826 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:25:01.633547 1086826 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:25:01.637277 1086826 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:25:01.639011 1086826 out.go:204]   - Booting up control plane ...
	I0603 12:25:01.639143 1086826 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:25:01.639247 1086826 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:25:01.639361 1086826 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:25:01.655395 1086826 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:25:01.656324 1086826 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:25:01.656395 1086826 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:25:01.797299 1086826 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:25:01.797451 1086826 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:25:02.797820 1086826 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00116594s
	I0603 12:25:02.797972 1086826 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:25:07.796996 1086826 kubeadm.go:309] [api-check] The API server is healthy after 5.001434435s
	I0603 12:25:07.809118 1086826 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:25:07.824366 1086826 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:25:07.858549 1086826 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:25:07.858769 1086826 kubeadm.go:309] [mark-control-plane] Marking the node addons-699562 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:25:07.872341 1086826 kubeadm.go:309] [bootstrap-token] Using token: 949ojx.jojr63h99myrhn1a
	I0603 12:25:07.873773 1086826 out.go:204]   - Configuring RBAC rules ...
	I0603 12:25:07.873890 1086826 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:25:07.890269 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:25:07.901714 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:25:07.905951 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:25:07.910910 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:25:07.915270 1086826 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:25:08.203573 1086826 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:25:08.642159 1086826 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:25:09.203978 1086826 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:25:09.206070 1086826 kubeadm.go:309] 
	I0603 12:25:09.206152 1086826 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:25:09.206165 1086826 kubeadm.go:309] 
	I0603 12:25:09.206239 1086826 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:25:09.206250 1086826 kubeadm.go:309] 
	I0603 12:25:09.206294 1086826 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:25:09.206383 1086826 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:25:09.206468 1086826 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:25:09.206510 1086826 kubeadm.go:309] 
	I0603 12:25:09.206601 1086826 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:25:09.206613 1086826 kubeadm.go:309] 
	I0603 12:25:09.206679 1086826 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:25:09.206690 1086826 kubeadm.go:309] 
	I0603 12:25:09.206752 1086826 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:25:09.206864 1086826 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:25:09.206964 1086826 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:25:09.207036 1086826 kubeadm.go:309] 
	I0603 12:25:09.207149 1086826 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:25:09.207264 1086826 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:25:09.207280 1086826 kubeadm.go:309] 
	I0603 12:25:09.207387 1086826 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 949ojx.jojr63h99myrhn1a \
	I0603 12:25:09.207541 1086826 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 12:25:09.207575 1086826 kubeadm.go:309] 	--control-plane 
	I0603 12:25:09.207586 1086826 kubeadm.go:309] 
	I0603 12:25:09.207685 1086826 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:25:09.207694 1086826 kubeadm.go:309] 
	I0603 12:25:09.207813 1086826 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 949ojx.jojr63h99myrhn1a \
	I0603 12:25:09.207922 1086826 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
	I0603 12:25:09.209051 1086826 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:25:09.209091 1086826 cni.go:84] Creating CNI manager for ""
	I0603 12:25:09.209105 1086826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:25:09.211191 1086826 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 12:25:09.212411 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 12:25:09.223015 1086826 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
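The bridge CNI config is written from memory as /etc/cni/net.d/1-k8s.conflist (496 bytes); its JSON is not reproduced in the log, so the sketch below only inspects what was installed rather than assuming its contents:

    sudo ls -l /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist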
	I0603 12:25:09.241025 1086826 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:25:09.241111 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-699562 minikube.k8s.io/updated_at=2024_06_03T12_25_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=addons-699562 minikube.k8s.io/primary=true
	I0603 12:25:09.241113 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:09.273153 1086826 ops.go:34] apiserver oom_adj: -16
	I0603 12:25:09.379202 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:09.879303 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:25:10.379958 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[previous command repeated at ~500ms intervals, 26 attempts in total, through 12:25:22.880058]
	I0603 12:25:23.038068 1086826 kubeadm.go:1107] duration metric: took 13.797019699s to wait for elevateKubeSystemPrivileges
	W0603 12:25:23.038132 1086826 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:25:23.038145 1086826 kubeadm.go:393] duration metric: took 24.234331356s to StartCluster
	I0603 12:25:23.038180 1086826 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:23.038355 1086826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:25:23.038990 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:25:23.039268 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 12:25:23.039288 1086826 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:25:23.041364 1086826 out.go:177] * Verifying Kubernetes components...
	I0603 12:25:23.039372 1086826 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0603 12:25:23.039478 1086826 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:25:23.042813 1086826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:25:23.042835 1086826 addons.go:69] Setting yakd=true in profile "addons-699562"
	I0603 12:25:23.042844 1086826 addons.go:69] Setting cloud-spanner=true in profile "addons-699562"
	I0603 12:25:23.042871 1086826 addons.go:234] Setting addon cloud-spanner=true in "addons-699562"
	I0603 12:25:23.042879 1086826 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-699562"
	I0603 12:25:23.042882 1086826 addons.go:69] Setting metrics-server=true in profile "addons-699562"
	I0603 12:25:23.042895 1086826 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-699562"
	I0603 12:25:23.042929 1086826 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-699562"
	I0603 12:25:23.042943 1086826 addons.go:69] Setting volcano=true in profile "addons-699562"
	I0603 12:25:23.042955 1086826 addons.go:69] Setting storage-provisioner=true in profile "addons-699562"
	I0603 12:25:23.042965 1086826 addons.go:69] Setting registry=true in profile "addons-699562"
	I0603 12:25:23.042969 1086826 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-699562"
	I0603 12:25:23.042983 1086826 addons.go:234] Setting addon registry=true in "addons-699562"
	I0603 12:25:23.042971 1086826 addons.go:234] Setting addon volcano=true in "addons-699562"
	I0603 12:25:23.043048 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.043107 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042871 1086826 addons.go:234] Setting addon yakd=true in "addons-699562"
	I0603 12:25:23.043183 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042936 1086826 addons.go:69] Setting gcp-auth=true in profile "addons-699562"
	I0603 12:25:23.043240 1086826 mustload.go:65] Loading cluster: addons-699562
	I0603 12:25:23.043428 1086826 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:25:23.043518 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.043550 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.043568 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.043579 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.043598 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.043609 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.043628 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.043668 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042915 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042907 1086826 addons.go:234] Setting addon metrics-server=true in "addons-699562"
	I0603 12:25:23.044377 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042917 1086826 addons.go:69] Setting default-storageclass=true in profile "addons-699562"
	I0603 12:25:23.044579 1086826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-699562"
	I0603 12:25:23.045076 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.045155 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.045255 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.045307 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042925 1086826 addons.go:69] Setting ingress=true in profile "addons-699562"
	I0603 12:25:23.045756 1086826 addons.go:234] Setting addon ingress=true in "addons-699562"
	I0603 12:25:23.045822 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.046281 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.046315 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042989 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042936 1086826 addons.go:69] Setting inspektor-gadget=true in profile "addons-699562"
	I0603 12:25:23.047032 1086826 addons.go:234] Setting addon inspektor-gadget=true in "addons-699562"
	I0603 12:25:23.047071 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042943 1086826 addons.go:69] Setting helm-tiller=true in profile "addons-699562"
	I0603 12:25:23.047185 1086826 addons.go:234] Setting addon helm-tiller=true in "addons-699562"
	I0603 12:25:23.047212 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.047273 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.047353 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.047374 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042988 1086826 addons.go:234] Setting addon storage-provisioner=true in "addons-699562"
	I0603 12:25:23.047674 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.042972 1086826 addons.go:69] Setting volumesnapshots=true in profile "addons-699562"
	I0603 12:25:23.047777 1086826 addons.go:234] Setting addon volumesnapshots=true in "addons-699562"
	I0603 12:25:23.047830 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.047846 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.047862 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.047866 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.047882 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.044547 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.042832 1086826 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-699562"
	I0603 12:25:23.048354 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.042928 1086826 addons.go:69] Setting ingress-dns=true in profile "addons-699562"
	I0603 12:25:23.048397 1086826 addons.go:234] Setting addon ingress-dns=true in "addons-699562"
	I0603 12:25:23.048437 1086826 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-699562"
	I0603 12:25:23.048446 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.048477 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.047570 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.049263 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.049297 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.053615 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.054040 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.069220 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
	I0603 12:25:23.073731 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I0603 12:25:23.073786 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
	I0603 12:25:23.073889 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0603 12:25:23.074001 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
	I0603 12:25:23.074277 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.074328 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.074963 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.075006 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.075941 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.076110 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.076230 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.076319 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.076403 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.077634 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.077660 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.077833 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.077853 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.077978 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.077989 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.078025 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.078054 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.078121 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.078545 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.078588 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.079218 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.079264 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.089204 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
	I0603 12:25:23.089281 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.089382 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.089488 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.089515 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.090485 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.090522 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.091428 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.091439 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.091998 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.092019 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.092018 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.092061 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.092561 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.092601 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.104451 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0603 12:25:23.104719 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46329
	I0603 12:25:23.105270 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.105974 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37029
	I0603 12:25:23.106145 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.106224 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42751
	I0603 12:25:23.106264 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.106336 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0603 12:25:23.106543 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.106556 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.106884 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.106914 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.107363 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.107374 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.107456 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.108069 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.108094 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.108126 1086826 addons.go:234] Setting addon default-storageclass=true in "addons-699562"
	I0603 12:25:23.108164 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.108242 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.108255 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.108522 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.108553 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.108768 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.108838 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0603 12:25:23.109309 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.109376 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.110138 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.110159 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.110602 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.110637 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.111158 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.111762 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.111797 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.113967 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.114019 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.114729 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.114755 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.115240 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.115820 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.115863 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.116533 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.116565 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.116932 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.117491 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.117527 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.121971 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.122294 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.125513 1086826 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-699562"
	I0603 12:25:23.125565 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.125951 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.126003 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.127933 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
	I0603 12:25:23.128409 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.129023 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.129046 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.133336 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I0603 12:25:23.133978 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.134032 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.134207 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.134828 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.134855 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.135234 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.135477 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.137625 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.139847 1086826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 12:25:23.138376 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.140132 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46499
	I0603 12:25:23.142584 1086826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0603 12:25:23.140984 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I0603 12:25:23.141908 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.143938 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I0603 12:25:23.144587 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0603 12:25:23.144635 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.145368 1086826 out.go:177]   - Using image docker.io/registry:2.8.3
	I0603 12:25:23.145978 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.146582 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.146643 1086826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 12:25:23.148445 1086826 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 12:25:23.148467 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0603 12:25:23.148489 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.146662 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0603 12:25:23.146051 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.147210 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.147313 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.147468 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.149199 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.150420 1086826 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0603 12:25:23.150511 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.151060 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.152055 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.151064 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.152103 1086826 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0603 12:25:23.152121 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0603 12:25:23.152142 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.152172 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.152107 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.151702 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.151488 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.152265 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.152469 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.152541 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.152630 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.153074 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.153116 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.153228 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.153262 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.153264 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.153299 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.153701 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.153771 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.153798 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.153986 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.154082 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.154329 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.154563 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.154740 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.155872 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:23.156261 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.156302 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.156862 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.157120 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.158786 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0603 12:25:23.157660 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.157984 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.160125 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0603 12:25:23.160139 1086826 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0603 12:25:23.160159 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.160204 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.160399 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.160579 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.160730 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.161513 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0603 12:25:23.162147 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.162765 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.162792 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.163193 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.163488 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.163614 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0603 12:25:23.164160 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.164193 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.164587 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.164614 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.164750 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.164763 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.164805 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.164974 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.165184 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.165234 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.165357 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.165664 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.166081 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.166343 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:23.166357 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:23.166891 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:23.166905 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:23.166914 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:23.166921 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:23.167166 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:23.167199 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:23.167207 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	W0603 12:25:23.167307 1086826 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0603 12:25:23.167541 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.169814 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0603 12:25:23.169073 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0603 12:25:23.171125 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0603 12:25:23.172468 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0603 12:25:23.171517 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.175038 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0603 12:25:23.174482 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.176419 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0603 12:25:23.176434 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.177888 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0603 12:25:23.180053 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0603 12:25:23.178457 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.180020 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0603 12:25:23.183356 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0603 12:25:23.183365 1086826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0603 12:25:23.181980 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.182398 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.182436 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38145
	I0603 12:25:23.182860 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46285
	I0603 12:25:23.183773 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.185766 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0603 12:25:23.185787 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0603 12:25:23.185818 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.184597 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
	I0603 12:25:23.185463 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0603 12:25:23.186847 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.186864 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.186875 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.186883 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.187209 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.187318 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.187375 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.187761 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.187866 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.188143 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.188161 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.188291 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.188305 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.188319 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.188849 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.190945 1086826 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0603 12:25:23.189483 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.189515 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.189541 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.189703 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0603 12:25:23.190094 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.190186 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I0603 12:25:23.190263 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.190561 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.191601 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.192351 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.193396 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0603 12:25:23.193467 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0603 12:25:23.193473 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.193494 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.193500 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.193539 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.194368 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.194396 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.194450 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.196235 1086826 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:25:23.194512 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.194692 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.195573 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.195618 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.195621 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.195676 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0603 12:25:23.195711 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.195757 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.196201 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.197205 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.197890 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.197916 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.197972 1086826 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:25:23.197983 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:25:23.198001 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.198024 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.198035 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.197833 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.198131 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.198185 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.198474 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.198547 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.200480 1086826 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0603 12:25:23.199048 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.199079 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.199482 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.199522 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.199896 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.201123 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.201303 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.201308 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.201884 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.202138 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0603 12:25:23.202157 1086826 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0603 12:25:23.202175 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.202289 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.202978 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.203030 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.204877 1086826 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0603 12:25:23.203059 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.203097 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.203124 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.203254 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.204827 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.205130 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0603 12:25:23.205471 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.206028 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.206324 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.206371 1086826 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 12:25:23.206656 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.206651 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.206709 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.207699 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.207618 1086826 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0603 12:25:23.207788 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.207805 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0603 12:25:23.209168 1086826 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 12:25:23.209191 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0603 12:25:23.209215 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.209173 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.208391 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.208423 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.208443 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.210668 1086826 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0603 12:25:23.208815 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.209149 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.207681 1086826 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0603 12:25:23.209933 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.209956 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.209992 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.212401 1086826 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0603 12:25:23.212427 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0603 12:25:23.212450 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.212513 1086826 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0603 12:25:23.214086 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 12:25:23.214106 1086826 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 12:25:23.214136 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.212668 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.214207 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.214231 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.215694 1086826 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0603 12:25:23.213493 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.215710 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0603 12:25:23.215726 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.213529 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.213766 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.216403 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.213791 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.215381 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.216476 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.216335 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.216495 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.216348 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.216526 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.216541 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.216699 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.216705 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.216755 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.216872 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.216928 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.216996 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.217163 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.217220 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.217347 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.217506 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.218087 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:23.218182 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:23.218381 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.218727 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.218766 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.218941 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.219129 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.221916 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.221953 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.221958 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.221979 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.221998 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.222143 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.222152 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.222332 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.222454 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.225717 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I0603 12:25:23.254144 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.254702 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.254723 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	W0603 12:25:23.255054 1086826 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43442->192.168.39.241:22: read: connection reset by peer
	I0603 12:25:23.255084 1086826 retry.go:31] will retry after 338.902816ms: ssh: handshake failed: read tcp 192.168.39.1:43442->192.168.39.241:22: read: connection reset by peer
	I0603 12:25:23.255125 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.255337 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.256992 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.257254 1086826 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:25:23.257274 1086826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:25:23.257293 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.260592 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.261091 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.261117 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.261326 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.261563 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.261725 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.261873 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.271376 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I0603 12:25:23.271765 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:23.272295 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:23.272319 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:23.272678 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:23.272874 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:23.274522 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:23.276422 1086826 out.go:177]   - Using image docker.io/busybox:stable
	I0603 12:25:23.278088 1086826 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0603 12:25:23.279525 1086826 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 12:25:23.279549 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0603 12:25:23.279573 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:23.282585 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.283142 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:23.283175 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:23.283311 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:23.283518 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:23.283758 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:23.283941 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:23.664725 1086826 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0603 12:25:23.664753 1086826 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0603 12:25:23.703703 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0603 12:25:23.707713 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0603 12:25:23.707747 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0603 12:25:23.751402 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0603 12:25:23.751436 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0603 12:25:23.753543 1086826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:25:23.755796 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
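	The bash pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts plugin block ahead of the "forward . /etc/resolv.conf" line, so host.minikube.internal resolves to the host-side gateway 192.168.39.1, adds a log directive before errors, and the result is pushed back with kubectl replace. For readability, the fragment that ends up in the Corefile is shown below as a Go constant; the snippet only prints it:

    // The sed edit above splices this block into the CoreDNS Corefile,
    // directly before the `forward . /etc/resolv.conf` line.
    package main

    import "fmt"

    const corednsHostsBlock = `        hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }`

    func main() {
        fmt.Println(corednsHostsBlock)
    }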
	I0603 12:25:23.759663 1086826 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0603 12:25:23.759693 1086826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0603 12:25:23.770791 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:25:23.831389 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0603 12:25:23.841828 1086826 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0603 12:25:23.841855 1086826 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0603 12:25:23.852519 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 12:25:23.852587 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0603 12:25:23.862595 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0603 12:25:23.878492 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0603 12:25:23.880249 1086826 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0603 12:25:23.880280 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0603 12:25:23.888423 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0603 12:25:23.888449 1086826 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0603 12:25:23.929569 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:25:23.954649 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0603 12:25:23.954686 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0603 12:25:24.003881 1086826 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0603 12:25:24.003919 1086826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0603 12:25:24.012187 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 12:25:24.012219 1086826 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 12:25:24.069615 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0603 12:25:24.069647 1086826 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0603 12:25:24.071510 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0603 12:25:24.071533 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0603 12:25:24.090048 1086826 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 12:25:24.090085 1086826 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0603 12:25:24.094186 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0603 12:25:24.142839 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:25:24.142888 1086826 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 12:25:24.188372 1086826 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0603 12:25:24.188411 1086826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0603 12:25:24.191010 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0603 12:25:24.191034 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0603 12:25:24.196505 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0603 12:25:24.242688 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0603 12:25:24.295016 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0603 12:25:24.295050 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0603 12:25:24.313366 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0603 12:25:24.313401 1086826 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0603 12:25:24.327127 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0603 12:25:24.327158 1086826 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0603 12:25:24.368069 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0603 12:25:24.368097 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0603 12:25:24.430388 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 12:25:24.468724 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0603 12:25:24.468763 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0603 12:25:24.491688 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0603 12:25:24.491709 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0603 12:25:24.507202 1086826 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 12:25:24.507226 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0603 12:25:24.573395 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0603 12:25:24.573434 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0603 12:25:24.673069 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 12:25:24.710678 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0603 12:25:24.740006 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0603 12:25:24.740041 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0603 12:25:24.774202 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0603 12:25:24.774240 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0603 12:25:25.120454 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0603 12:25:25.120483 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0603 12:25:25.139484 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 12:25:25.139513 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0603 12:25:25.367527 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0603 12:25:25.367559 1086826 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0603 12:25:25.399703 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0603 12:25:25.664579 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0603 12:25:25.664608 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0603 12:25:25.965006 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0603 12:25:25.965039 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0603 12:25:26.478180 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 12:25:26.478220 1086826 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0603 12:25:26.694814 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0603 12:25:30.234960 1086826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0603 12:25:30.235007 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:30.238190 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:30.238600 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:30.238635 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:30.238823 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:30.239054 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:30.239222 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:30.239392 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:30.740050 1086826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0603 12:25:30.851470 1086826 addons.go:234] Setting addon gcp-auth=true in "addons-699562"
	I0603 12:25:30.851557 1086826 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:25:30.852058 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:30.852094 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:30.868185 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0603 12:25:30.868773 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:30.869431 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:30.869461 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:30.869815 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:30.870344 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:25:30.870377 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:25:30.886448 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0603 12:25:30.886966 1086826 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:25:30.887481 1086826 main.go:141] libmachine: Using API Version  1
	I0603 12:25:30.887505 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:25:30.887824 1086826 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:25:30.888034 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:25:30.889619 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:25:30.889859 1086826 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0603 12:25:30.889888 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:25:30.892565 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:30.893052 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:25:30.893120 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:25:30.893241 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:25:30.893420 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:25:30.893579 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:25:30.893836 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:25:31.588516 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.884760346s)
	I0603 12:25:31.588543 1086826 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.834961071s)
	I0603 12:25:31.588585 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588588 1086826 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.832761998s)
	I0603 12:25:31.588599 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588607 1086826 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0603 12:25:31.588647 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.817821512s)
	I0603 12:25:31.588690 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588707 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588761 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.72613988s)
	I0603 12:25:31.588790 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588798 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588816 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.710295183s)
	I0603 12:25:31.588707 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.757290038s)
	I0603 12:25:31.588849 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588855 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588835 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588888 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.659283987s)
	I0603 12:25:31.588892 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588903 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588912 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588951 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.494737371s)
	I0603 12:25:31.588970 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.588977 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.588991 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.392463973s)
	I0603 12:25:31.589009 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589017 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589046 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.346321083s)
	I0603 12:25:31.589063 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589073 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589124 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.158682793s)
	I0603 12:25:31.589141 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589151 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589293 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.916193022s)
	W0603 12:25:31.589322 1086826 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0603 12:25:31.589348 1086826 retry.go:31] will retry after 269.565045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
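	The "no matches for kind \"VolumeSnapshotClass\"" failure above is the usual race when CRDs and the custom resources that use them are sent in the same kubectl apply: the VolumeSnapshotClass object is rejected because the just-created CRD has not been established in API discovery yet. minikube simply retries, and the later apply --force run at 12:25:31.860 completes without a second error in this log. One way to avoid the race altogether, sketched under the assumption that plain kubectl is on PATH and the ambient kubeconfig points at the cluster (this is not minikube's addon code):

    // Sketch: create the snapshot CRD, wait for it to be Established,
    // then apply the objects that depend on it.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        // 1. Create the CRD first.
        run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
        // 2. Block until the API server reports the CRD as Established.
        run("wait", "--for=condition=Established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
        // 3. Now the VolumeSnapshotClass object can be applied safely.
        run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
    }

	kubectl wait exits non-zero on timeout, so the sketch fails fast instead of racing the CRD registration.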
	I0603 12:25:31.589443 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.878723157s)
	I0603 12:25:31.589471 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589483 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589514 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589540 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589573 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589583 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589592 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589600 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.18985866s)
	I0603 12:25:31.589630 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589633 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589651 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589609 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589720 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589732 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589736 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589739 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589746 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589767 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589777 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589785 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589791 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589792 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589802 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589812 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589815 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589819 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589822 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589830 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589836 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589792 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589877 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.589899 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.589906 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.589915 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.589922 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.589987 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590005 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590022 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590027 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590181 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590194 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590201 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.590207 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.590256 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590277 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590287 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590294 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.590303 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.590339 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590361 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590374 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590381 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.590388 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.590432 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590455 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590462 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590469 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.590475 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.590535 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.590556 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.590563 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.590572 1086826 addons.go:475] Verifying addon metrics-server=true in "addons-699562"
	I0603 12:25:31.592647 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.592688 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.592699 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.593151 1086826 node_ready.go:35] waiting up to 6m0s for node "addons-699562" to be "Ready" ...
	I0603 12:25:31.593360 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.593391 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.593401 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.593427 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.593437 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.593482 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.593507 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.593513 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.595143 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595175 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595183 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.595271 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595289 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595296 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.595303 1086826 addons.go:475] Verifying addon ingress=true in "addons-699562"
	I0603 12:25:31.598115 1086826 out.go:177] * Verifying ingress addon...
	I0603 12:25:31.595658 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595682 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595698 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595717 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595736 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595755 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.595770 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.595787 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.596138 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.596162 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.596337 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.596353 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.599578 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599608 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599609 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599614 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599672 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.599618 1086826 addons.go:475] Verifying addon registry=true in "addons-699562"
	I0603 12:25:31.599705 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.601327 1086826 out.go:177] * Verifying registry addon...
	I0603 12:25:31.599623 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.600355 1086826 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0603 12:25:31.603078 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.603348 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.603401 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.603419 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.604909 1086826 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-699562 service yakd-dashboard -n yakd-dashboard
	
	I0603 12:25:31.603845 1086826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0603 12:25:31.618188 1086826 node_ready.go:49] node "addons-699562" has status "Ready":"True"
	I0603 12:25:31.618210 1086826 node_ready.go:38] duration metric: took 25.032701ms for node "addons-699562" to be "Ready" ...
	I0603 12:25:31.618219 1086826 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:25:31.619958 1086826 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0603 12:25:31.619977 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:31.629749 1086826 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0603 12:25:31.629771 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
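	The repeated kapi.go:96 lines that follow are a poll loop: minikube lists the pods behind each addon's label selector and keeps waiting while any of them is still Pending. The same check can be reproduced outside minikube; a sketch assuming kubectl is on PATH, using the ingress-nginx selector and namespace from the log:

    // Sketch: poll the pods behind a label selector until none is left
    // in Pending, mirroring the kapi.go wait loop above.
    package main

    import (
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        for {
            out, err := exec.Command("kubectl", "get", "pods",
                "-n", "ingress-nginx",
                "-l", "app.kubernetes.io/name=ingress-nginx",
                "-o", `jsonpath={range .items[*]}{.status.phase}{"\n"}{end}`).Output()
            if err != nil {
                log.Fatalf("kubectl get pods: %v", err)
            }
            phases := strings.Fields(string(out))
            ready := len(phases) > 0
            for _, p := range phases {
                if p != "Running" && p != "Succeeded" { // admission jobs end as Succeeded
                    ready = false
                }
            }
            if ready {
                log.Println("ingress-nginx pods are up")
                return
            }
            log.Printf("still waiting, phases: %v", phases)
            time.Sleep(2 * time.Second)
        }
    }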
	I0603 12:25:31.643074 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.643101 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.643239 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:31.643263 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:31.643396 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.643421 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:31.643530 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:31.643580 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:31.643589 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	W0603 12:25:31.643687 1086826 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
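	The warning above is a plain optimistic-concurrency conflict: the addon tries to mark the local-path StorageClass as the cluster default, but another writer updates the object first, so the write built on the stale resourceVersion is rejected with "the object has been modified". One way to sidestep this, sketched under the assumption that kubectl is on PATH (this is not the storage-provisioner-rancher addon's own code), is to patch only the default-class annotation and retry on failure:

    // Sketch: mark local-path as the default StorageClass with a
    // strategic-merge patch, retrying a few times on transient errors.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`
        for attempt := 1; attempt <= 5; attempt++ {
            out, err := exec.Command("kubectl", "patch", "storageclass", "local-path",
                "-p", patch).CombinedOutput()
            if err == nil {
                log.Printf("local-path marked default: %s", out)
                return
            }
            log.Printf("attempt %d failed (%v): %s", attempt, err, out)
            time.Sleep(time.Duration(attempt) * time.Second)
        }
        log.Fatal("could not mark local-path as the default StorageClass")
    }

	A patch does not carry a resourceVersion, so unlike a read-modify-update it cannot hit this particular conflict.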
	I0603 12:25:31.661700 1086826 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hmhdl" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.709539 1086826 pod_ready.go:92] pod "coredns-7db6d8ff4d-hmhdl" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.709561 1086826 pod_ready.go:81] duration metric: took 47.835085ms for pod "coredns-7db6d8ff4d-hmhdl" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.709572 1086826 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qjklp" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.749241 1086826 pod_ready.go:92] pod "coredns-7db6d8ff4d-qjklp" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.749267 1086826 pod_ready.go:81] duration metric: took 39.689686ms for pod "coredns-7db6d8ff4d-qjklp" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.749278 1086826 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.782613 1086826 pod_ready.go:92] pod "etcd-addons-699562" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.782641 1086826 pod_ready.go:81] duration metric: took 33.356992ms for pod "etcd-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.782651 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.827401 1086826 pod_ready.go:92] pod "kube-apiserver-addons-699562" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.827426 1086826 pod_ready.go:81] duration metric: took 44.767786ms for pod "kube-apiserver-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.827438 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.860140 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0603 12:25:31.996499 1086826 pod_ready.go:92] pod "kube-controller-manager-addons-699562" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:31.996535 1086826 pod_ready.go:81] duration metric: took 169.090158ms for pod "kube-controller-manager-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:31.996551 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6ssr8" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:32.092808 1086826 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-699562" context rescaled to 1 replicas
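	The kapi.go:248 line records the coredns Deployment being scaled down to a single replica (two coredns pods, -hmhdl and -qjklp, were reported Ready a moment earlier). Roughly the equivalent manual operation, assuming kubectl and the addons-699562 context:

    // Sketch: scale the coredns Deployment in kube-system to one replica.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "addons-699562",
            "-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
        if err != nil {
            log.Fatalf("scale failed: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }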
	I0603 12:25:32.107781 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:32.113006 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:32.397400 1086826 pod_ready.go:92] pod "kube-proxy-6ssr8" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:32.397456 1086826 pod_ready.go:81] duration metric: took 400.897369ms for pod "kube-proxy-6ssr8" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:32.397471 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:32.609145 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:32.625118 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:32.804548 1086826 pod_ready.go:92] pod "kube-scheduler-addons-699562" in "kube-system" namespace has status "Ready":"True"
	I0603 12:25:32.804582 1086826 pod_ready.go:81] duration metric: took 407.101572ms for pod "kube-scheduler-addons-699562" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:32.804597 1086826 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace to be "Ready" ...
	I0603 12:25:33.108552 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:33.128901 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:33.534656 1086826 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.644766775s)
	I0603 12:25:33.536193 1086826 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0603 12:25:33.534859 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.839987769s)
	I0603 12:25:33.536260 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:33.536276 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:33.539162 1086826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0603 12:25:33.538004 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:33.538036 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:33.540446 1086826 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0603 12:25:33.540459 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:33.540472 1086826 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0603 12:25:33.540478 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:33.540491 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:33.540734 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:33.540752 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:33.540765 1086826 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-699562"
	I0603 12:25:33.542102 1086826 out.go:177] * Verifying csi-hostpath-driver addon...
	I0603 12:25:33.544077 1086826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0603 12:25:33.568601 1086826 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0603 12:25:33.568623 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:33.641891 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:33.645693 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:33.791417 1086826 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0603 12:25:33.791441 1086826 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0603 12:25:33.842537 1086826 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 12:25:33.842563 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0603 12:25:33.962934 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0603 12:25:34.049552 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:34.107653 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:34.110594 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:34.205263 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.345061511s)
	I0603 12:25:34.205315 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:34.205328 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:34.205726 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:34.205744 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:34.205755 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:34.205786 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:34.205838 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:34.206122 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:34.206140 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:34.206142 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:34.564029 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:34.621424 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:34.621952 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:34.812209 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:35.053298 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:35.111227 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:35.115219 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:35.560207 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:35.607579 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.644597027s)
	I0603 12:25:35.607651 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:35.607672 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:35.608000 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:35.608020 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:35.608030 1086826 main.go:141] libmachine: Making call to close driver server
	I0603 12:25:35.608038 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:25:35.608049 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:35.608290 1086826 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:25:35.608305 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:25:35.608315 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:25:35.609668 1086826 addons.go:475] Verifying addon gcp-auth=true in "addons-699562"
	I0603 12:25:35.611353 1086826 out.go:177] * Verifying gcp-auth addon...
	I0603 12:25:35.613544 1086826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
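	The kapi.go wait above amounts to listing pods in the target namespace by label selector and checking their phase until one leaves Pending. A minimal sketch with client-go, assuming an already-constructed *kubernetes.Clientset; the helper name is illustrative, not minikube's actual implementation:

    package example

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // checkAddonPod lists pods matching a label selector (e.g. "kubernetes.io/minikube-addons=gcp-auth")
    // in a namespace and reports whether any of them is Running.
    func checkAddonPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase == corev1.PodRunning {
    			return true, nil
    		}
    		fmt.Printf("pod %q still %s\n", p.Name, p.Status.Phase) // Pending, as in the log lines above
    	}
    	return false, nil
    }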
	I0603 12:25:35.622615 1086826 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0603 12:25:35.622636 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:35.622740 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:35.630159 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:36.049713 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:36.107880 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:36.111569 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:36.116514 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:36.550083 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:36.607537 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:36.610472 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:36.616565 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:37.049483 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:37.108073 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:37.112270 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:37.116999 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:37.310101 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:37.551354 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:37.607806 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:37.611159 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:37.616391 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:38.050593 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:38.108207 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:38.110598 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:38.116688 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:38.550813 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:38.608089 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:38.610368 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:38.617203 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:39.050555 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:39.109214 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:39.111875 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:39.117025 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:39.311090 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:39.550464 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:39.609235 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:39.619822 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:39.623829 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:40.050197 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:40.107940 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:40.111079 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:40.117190 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:40.555709 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:40.606871 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:40.610515 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:40.617821 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:41.051097 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:41.108305 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:41.112441 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:41.120952 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:41.550698 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:41.607766 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:41.611170 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:41.616572 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:41.812095 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:42.056884 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:42.107517 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:42.110628 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:42.117448 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:42.550228 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:42.607367 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:42.610776 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:42.617304 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:43.050537 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:43.107475 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:43.110791 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:43.117489 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:43.549629 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:43.607992 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:43.610648 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:43.617442 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:44.050927 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:44.108088 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:44.111037 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:44.117569 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:44.311412 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:44.549239 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:44.607460 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:44.610239 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:44.616396 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:45.050428 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:45.108292 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:45.111223 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:45.116803 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:45.549667 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:45.608159 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:45.611071 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:45.617369 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:46.049740 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:46.108356 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:46.111863 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:46.117184 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:46.550035 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:46.610665 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:46.616929 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:46.617766 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:46.811474 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:47.050811 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:47.108517 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:47.111277 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:47.118121 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:47.717272 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:47.718579 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:47.720052 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:47.720582 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:48.050550 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:48.107795 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:48.124102 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:48.125107 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:48.550864 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:48.608389 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:48.612915 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:48.617073 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:48.812147 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:49.050012 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:49.108020 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:49.111903 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:49.117394 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:49.553646 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:49.616003 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:49.616158 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:49.617707 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:50.050251 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:50.107666 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:50.110733 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:50.117753 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:50.551307 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:50.608304 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:50.611064 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:50.619295 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:50.812287 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:51.049733 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:51.108750 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:51.111336 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:51.117075 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:51.549537 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:51.607284 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:51.616857 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:51.621130 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:52.050226 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:52.107358 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:52.109850 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:52.117931 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:52.550300 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:52.607639 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:52.610451 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:52.616746 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:53.050021 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:53.107896 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:53.110487 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:53.118939 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:53.310161 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:53.549054 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:53.608773 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:53.611208 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:53.616121 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:54.051892 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:54.107514 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:54.110875 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:54.117146 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:54.550002 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:54.613315 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:54.614243 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:54.617980 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:55.050714 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:55.108031 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:55.114500 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:55.116547 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:55.311779 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:55.549866 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:55.607101 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:55.609799 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:55.616946 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:56.050001 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:56.107340 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:56.110621 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:56.116795 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:56.549343 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:56.606883 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:56.611566 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:56.616628 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:57.049643 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:57.107787 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:57.110720 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:57.116984 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:57.552490 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:57.608047 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:57.610865 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:57.617100 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:57.811362 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:25:58.050515 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:58.107939 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:58.111269 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:58.116768 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:58.550559 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:58.607094 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:58.610875 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:58.624472 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:59.050561 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:59.107350 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:59.112122 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:59.116553 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:59.550259 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:25:59.607876 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:25:59.611375 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:25:59.616611 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:25:59.813733 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:00.050436 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:00.107324 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:00.110757 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:00.116870 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:00.551058 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:00.607390 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:00.611966 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:01.121735 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:01.121924 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:01.126130 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:01.126983 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:01.129553 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:01.550366 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:01.607513 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:01.613604 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:01.617595 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:02.050994 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:02.107780 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:02.112490 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:02.116676 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:02.311086 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:02.550413 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:02.607651 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:02.611082 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:02.616850 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:03.057400 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:03.112385 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:03.120791 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:03.123378 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:03.564652 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:03.608348 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:03.613237 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:03.617465 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:04.049337 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:04.108077 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:04.111136 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:04.116450 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:04.312631 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:04.550826 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:04.607574 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:04.610189 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:04.616596 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:05.049987 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:05.107654 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:05.112022 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:05.116491 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:05.558654 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:05.610678 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:05.612773 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:05.616256 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:06.050837 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:06.108894 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:06.116558 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:06.117868 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:06.318490 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:06.552708 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:06.607804 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:06.611009 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:06.616728 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:07.050266 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:07.108163 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:07.110480 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:07.116995 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:07.549887 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:07.607248 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:07.609801 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:07.616886 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:08.050013 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:08.107356 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:08.110936 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:08.117445 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:08.549795 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:08.608572 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:08.612546 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:08.618812 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:08.809645 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:09.050140 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:09.108213 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:09.110707 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:09.126656 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:09.627655 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:09.628075 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:09.629523 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:09.632992 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:10.052780 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:10.107687 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:10.115155 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:10.117087 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:10.559545 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:10.618077 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:10.618827 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:10.624128 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:10.809946 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:11.051024 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:11.107444 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:11.119674 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:11.129911 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:11.551452 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:11.608709 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:11.612387 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:11.618669 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:12.051203 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:12.108483 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:12.113638 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:12.117352 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:12.550419 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:12.606898 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:12.615006 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:12.622892 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:12.810639 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:13.050044 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:13.107552 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:13.110151 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:13.116169 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:13.549565 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:13.607597 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:13.610397 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:13.616616 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:14.075866 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:14.107785 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:14.110642 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:14.116976 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:14.550402 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:14.607903 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:14.624896 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:14.625760 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:14.811698 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:15.050201 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:15.107696 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:15.110185 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0603 12:26:15.116396 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:15.549935 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:15.608025 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:15.613716 1086826 kapi.go:107] duration metric: took 44.009864907s to wait for kubernetes.io/minikube-addons=registry ...
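	The duration metric above is the elapsed time of a poll-until-ready loop over a label selector. A minimal sketch of such a loop using the k8s.io/apimachinery wait helpers; the interval, timeout handling, and function names are illustrative, and checkFn stands in for a per-label check like the one sketched earlier:

    package example

    import (
    	"context"
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForLabel polls checkFn until it reports true or the timeout elapses, then logs the elapsed time.
    func waitForLabel(ctx context.Context, checkFn func(context.Context) (bool, error), timeout time.Duration) error {
    	start := time.Now()
    	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		return checkFn(ctx)
    	})
    	if err == nil {
    		fmt.Printf("took %s to wait for the addon label\n", time.Since(start))
    	}
    	return err
    }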
	I0603 12:26:15.616045 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:16.050623 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:16.107997 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:16.118286 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:16.550566 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:16.607168 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:16.617337 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:17.049985 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:17.111754 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:17.117847 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:17.310623 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:17.550155 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:17.607335 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:17.617935 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:18.050331 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:18.107929 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:18.117159 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:18.549743 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:18.608683 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:18.616912 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:19.051032 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:19.107395 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:19.117733 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:19.312689 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:19.551709 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:19.607931 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:19.618005 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:20.056111 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:20.120487 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:20.123371 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:20.549077 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:20.607149 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:20.617546 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:21.049157 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:21.107033 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:21.117469 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:21.553377 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:21.607131 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:21.617199 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:21.811213 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:22.049945 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:22.107589 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:22.117235 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:22.550601 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:22.748260 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:22.748410 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:23.051046 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:23.107427 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:23.116770 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:23.552644 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:23.607528 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:23.616536 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:23.815297 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:24.049690 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:24.107818 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:24.116850 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:24.549590 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:24.607702 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:24.616744 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:25.051117 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:25.110714 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:25.118175 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:25.549997 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:25.607989 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:25.616932 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:26.049772 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:26.107839 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:26.117258 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:26.313163 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:26.550215 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:26.609188 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:26.617513 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:27.050025 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:27.110354 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:27.121036 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:27.552177 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:27.609081 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:27.617129 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:28.063267 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:28.111656 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:28.140408 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:28.554049 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:28.607648 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:28.617599 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:28.815776 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:29.050702 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:29.108722 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:29.117641 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:29.549232 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:29.612125 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:29.619247 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:30.050328 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:30.107579 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:30.116842 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:30.549987 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:30.607456 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:30.617024 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:31.049548 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:31.107580 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:31.117334 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:31.310166 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:31.549604 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:31.610474 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:31.619704 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:32.049274 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:32.107173 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:32.117440 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:32.549145 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:32.607102 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:32.617211 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:33.050797 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:33.107940 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:33.117288 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:33.314001 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:33.553058 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:33.607307 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:33.625683 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:34.051432 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:34.108409 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:34.119806 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:34.549419 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:34.608318 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:34.618266 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:35.049502 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0603 12:26:35.107005 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:35.116950 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:35.549499 1086826 kapi.go:107] duration metric: took 1m2.005422319s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0603 12:26:35.607675 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:35.616842 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:35.810331 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:36.108106 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:36.117479 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:36.609257 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:36.617584 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:37.108854 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:37.116821 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:37.792868 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:37.793314 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:37.811615 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:38.107926 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:38.117239 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:38.607378 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:38.616370 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:39.107642 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:39.117063 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:39.609297 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:39.616591 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:40.107745 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:40.117791 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:40.319439 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:40.986440 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:41.000146 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:41.108887 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:41.119076 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:41.607823 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:41.617096 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:42.108419 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:42.117525 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:42.608967 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0603 12:26:42.617643 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:42.811899 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:43.108162 1086826 kapi.go:107] duration metric: took 1m11.5078006s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0603 12:26:43.117460 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:43.617102 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:44.306990 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:44.618442 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0603 12:26:45.117494 1086826 kapi.go:107] duration metric: took 1m9.503943568s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0603 12:26:45.119711 1086826 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-699562 cluster.
	I0603 12:26:45.121144 1086826 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0603 12:26:45.122532 1086826 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0603 12:26:45.123925 1086826 out.go:177] * Enabled addons: metrics-server, storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, inspektor-gadget, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0603 12:26:45.125165 1086826 addons.go:510] duration metric: took 1m22.085788168s for enable addons: enabled=[metrics-server storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner inspektor-gadget helm-tiller yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
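	[editor's note] The gcp-auth messages above are the one real how-to in this stretch of the log: a pod opts out of having the GCP credentials mounted by carrying a `gcp-auth-skip-secret` label (the CRI-O sandbox dump later in this log shows the yakd-dashboard pod with `gcp-auth-skip-secret: true`). As a minimal, non-authoritative sketch, the label could be expressed on a pod object with the standard client-go types; the pod name, namespace, and container below are hypothetical and exist only for illustration.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Hypothetical pod used only to illustrate the opt-out label.
		// The key "gcp-auth-skip-secret" comes from the addon message above;
		// the value "true" matches the yakd-dashboard sandbox labels shown
		// in the CRI-O listing further down in this report.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "example-no-gcp-creds", // hypothetical name
				Namespace: "default",
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx"}, // placeholder container
				},
			},
		}
		fmt.Printf("%s/%s labels: %v\n", pod.Namespace, pod.Name, pod.Labels)
	}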
	I0603 12:26:45.311068 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:47.318344 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:49.810186 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:51.811646 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:54.311268 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:56.311669 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:26:58.311958 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:00.811116 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:02.812103 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:05.311540 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:07.811048 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
	I0603 12:27:09.811356 1086826 pod_ready.go:92] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:09.811384 1086826 pod_ready.go:81] duration metric: took 1m37.006778734s for pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:09.811395 1086826 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2sw5z" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:09.817624 1086826 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2sw5z" in "kube-system" namespace has status "Ready":"True"
	I0603 12:27:09.817647 1086826 pod_ready.go:81] duration metric: took 6.245626ms for pod "nvidia-device-plugin-daemonset-2sw5z" in "kube-system" namespace to be "Ready" ...
	I0603 12:27:09.817666 1086826 pod_ready.go:38] duration metric: took 1m38.19943696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:27:09.817688 1086826 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:27:09.817740 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:27:09.817800 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:27:09.866770 1086826 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:09.866803 1086826 cri.go:89] found id: ""
	I0603 12:27:09.866814 1086826 logs.go:276] 1 containers: [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4]
	I0603 12:27:09.866881 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:09.871934 1086826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:27:09.872023 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:27:09.912368 1086826 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:09.912393 1086826 cri.go:89] found id: ""
	I0603 12:27:09.912402 1086826 logs.go:276] 1 containers: [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514]
	I0603 12:27:09.912466 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:09.917195 1086826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:27:09.917259 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:27:09.966345 1086826 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:09.966367 1086826 cri.go:89] found id: ""
	I0603 12:27:09.966376 1086826 logs.go:276] 1 containers: [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3]
	I0603 12:27:09.966437 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:09.970794 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:27:09.970861 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:27:10.010220 1086826 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:10.010243 1086826 cri.go:89] found id: ""
	I0603 12:27:10.010252 1086826 logs.go:276] 1 containers: [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b]
	I0603 12:27:10.010307 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:10.014883 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:27:10.014938 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:27:10.056002 1086826 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:10.056034 1086826 cri.go:89] found id: ""
	I0603 12:27:10.056046 1086826 logs.go:276] 1 containers: [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e]
	I0603 12:27:10.056117 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:10.060632 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:27:10.060698 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:27:10.098738 1086826 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:10.098762 1086826 cri.go:89] found id: ""
	I0603 12:27:10.098770 1086826 logs.go:276] 1 containers: [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8]
	I0603 12:27:10.098819 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:10.103199 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:27:10.103278 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:27:10.152873 1086826 cri.go:89] found id: ""
	I0603 12:27:10.152908 1086826 logs.go:276] 0 containers: []
	W0603 12:27:10.152919 1086826 logs.go:278] No container was found matching "kindnet"
	I0603 12:27:10.152934 1086826 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:27:10.152953 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:27:11.259298 1086826 logs.go:123] Gathering logs for container status ...
	I0603 12:27:11.259359 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:27:11.306971 1086826 logs.go:123] Gathering logs for kubelet ...
	I0603 12:27:11.307009 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0603 12:27:11.359963 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:11.360135 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:11.392921 1086826 logs.go:123] Gathering logs for etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] ...
	I0603 12:27:11.392964 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:11.453617 1086826 logs.go:123] Gathering logs for kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] ...
	I0603 12:27:11.453666 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:11.502128 1086826 logs.go:123] Gathering logs for kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] ...
	I0603 12:27:11.502170 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:11.563524 1086826 logs.go:123] Gathering logs for kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] ...
	I0603 12:27:11.563569 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:11.600560 1086826 logs.go:123] Gathering logs for dmesg ...
	I0603 12:27:11.600598 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:27:11.616410 1086826 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:27:11.616449 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:27:11.743210 1086826 logs.go:123] Gathering logs for kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] ...
	I0603 12:27:11.743245 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:11.790432 1086826 logs.go:123] Gathering logs for coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] ...
	I0603 12:27:11.790480 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:11.832586 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:11.832622 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0603 12:27:11.832717 1086826 out.go:239] X Problems detected in kubelet:
	W0603 12:27:11.832734 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:11.832747 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:11.832762 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:11.832772 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
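	[editor's note] Each diagnostic round above follows the same pattern: resolve the container ID for every control-plane component with `sudo crictl ps -a --quiet --name=<component>`, then tail that container's logs. A rough, hypothetical Go sketch of the ID lookup (assuming crictl and sudo are available on the node, as they are inside the minikube VM; the helper name listContainerIDs is made up):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the crictl query repeated in the log above:
	// it asks the CRI runtime for all containers whose name matches the
	// given component and returns their IDs.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		ids, err := listContainerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("crictl lookup failed:", err)
			return
		}
		fmt.Println("found", len(ids), "container(s):", ids)
	}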
	I0603 12:27:21.834706 1086826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:27:21.854545 1086826 api_server.go:72] duration metric: took 1m58.8152147s to wait for apiserver process to appear ...
	I0603 12:27:21.854577 1086826 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:27:21.854630 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:27:21.854692 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:27:21.895375 1086826 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:21.895398 1086826 cri.go:89] found id: ""
	I0603 12:27:21.895406 1086826 logs.go:276] 1 containers: [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4]
	I0603 12:27:21.895460 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:21.900015 1086826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:27:21.900067 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:27:21.943623 1086826 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:21.943658 1086826 cri.go:89] found id: ""
	I0603 12:27:21.943667 1086826 logs.go:276] 1 containers: [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514]
	I0603 12:27:21.943731 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:21.948627 1086826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:27:21.948694 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:27:21.992699 1086826 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:21.992725 1086826 cri.go:89] found id: ""
	I0603 12:27:21.992735 1086826 logs.go:276] 1 containers: [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3]
	I0603 12:27:21.992800 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:21.998808 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:27:21.998885 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:27:22.044523 1086826 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:22.044551 1086826 cri.go:89] found id: ""
	I0603 12:27:22.044562 1086826 logs.go:276] 1 containers: [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b]
	I0603 12:27:22.044631 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:22.049328 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:27:22.049401 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:27:22.091373 1086826 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:22.091398 1086826 cri.go:89] found id: ""
	I0603 12:27:22.091406 1086826 logs.go:276] 1 containers: [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e]
	I0603 12:27:22.091468 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:22.095823 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:27:22.095878 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:27:22.134585 1086826 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:22.134616 1086826 cri.go:89] found id: ""
	I0603 12:27:22.134627 1086826 logs.go:276] 1 containers: [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8]
	I0603 12:27:22.134682 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:22.138852 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:27:22.138911 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:27:22.183843 1086826 cri.go:89] found id: ""
	I0603 12:27:22.183868 1086826 logs.go:276] 0 containers: []
	W0603 12:27:22.183876 1086826 logs.go:278] No container was found matching "kindnet"
	I0603 12:27:22.183886 1086826 logs.go:123] Gathering logs for dmesg ...
	I0603 12:27:22.183900 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:27:22.199319 1086826 logs.go:123] Gathering logs for kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] ...
	I0603 12:27:22.199362 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:22.255816 1086826 logs.go:123] Gathering logs for etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] ...
	I0603 12:27:22.255848 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:22.309345 1086826 logs.go:123] Gathering logs for kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] ...
	I0603 12:27:22.309387 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:22.349280 1086826 logs.go:123] Gathering logs for kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] ...
	I0603 12:27:22.349321 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:22.418782 1086826 logs.go:123] Gathering logs for kubelet ...
	I0603 12:27:22.418821 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0603 12:27:22.478644 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:22.478888 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:22.516116 1086826 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:27:22.516161 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:27:22.649308 1086826 logs.go:123] Gathering logs for coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] ...
	I0603 12:27:22.649341 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:22.689734 1086826 logs.go:123] Gathering logs for kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] ...
	I0603 12:27:22.689785 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:22.733915 1086826 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:27:22.733952 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:27:23.479507 1086826 logs.go:123] Gathering logs for container status ...
	I0603 12:27:23.479570 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:27:23.533200 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:23.533234 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0603 12:27:23.533305 1086826 out.go:239] X Problems detected in kubelet:
	W0603 12:27:23.533317 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:23.533324 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:23.533336 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:23.533346 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:33.534751 1086826 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
	I0603 12:27:33.539341 1086826 api_server.go:279] https://192.168.39.241:8443/healthz returned 200:
	ok
	I0603 12:27:33.540654 1086826 api_server.go:141] control plane version: v1.30.1
	I0603 12:27:33.540682 1086826 api_server.go:131] duration metric: took 11.686097199s to wait for apiserver health ...
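	[editor's note] The healthz wait above amounts to polling an HTTPS GET against the apiserver's /healthz endpoint until it returns 200 with body "ok". A minimal sketch of such a probe, using the endpoint shown in the log; the retry count and interval are assumptions, and certificate verification is skipped purely for illustration (minikube itself uses the cluster's client certificates rather than an insecure client):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the api_server.go log line above.
		url := "https://192.168.39.241:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// Poll until the endpoint answers 200 or the attempts run out.
		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver healthz did not become ready")
	}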
	I0603 12:27:33.540692 1086826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:27:33.540725 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 12:27:33.540790 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 12:27:33.580781 1086826 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:33.580805 1086826 cri.go:89] found id: ""
	I0603 12:27:33.580815 1086826 logs.go:276] 1 containers: [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4]
	I0603 12:27:33.580884 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.585293 1086826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 12:27:33.585356 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 12:27:33.624776 1086826 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:33.624799 1086826 cri.go:89] found id: ""
	I0603 12:27:33.624807 1086826 logs.go:276] 1 containers: [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514]
	I0603 12:27:33.624855 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.629338 1086826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 12:27:33.629437 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 12:27:33.676960 1086826 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:33.676995 1086826 cri.go:89] found id: ""
	I0603 12:27:33.677007 1086826 logs.go:276] 1 containers: [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3]
	I0603 12:27:33.677078 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.681551 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 12:27:33.681615 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 12:27:33.719279 1086826 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:33.719302 1086826 cri.go:89] found id: ""
	I0603 12:27:33.719311 1086826 logs.go:276] 1 containers: [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b]
	I0603 12:27:33.719384 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.723688 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 12:27:33.723743 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 12:27:33.760181 1086826 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:33.760210 1086826 cri.go:89] found id: ""
	I0603 12:27:33.760221 1086826 logs.go:276] 1 containers: [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e]
	I0603 12:27:33.760283 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.764788 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 12:27:33.764868 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 12:27:33.805979 1086826 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:33.806017 1086826 cri.go:89] found id: ""
	I0603 12:27:33.806030 1086826 logs.go:276] 1 containers: [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8]
	I0603 12:27:33.806117 1086826 ssh_runner.go:195] Run: which crictl
	I0603 12:27:33.810640 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 12:27:33.810719 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 12:27:33.860436 1086826 cri.go:89] found id: ""
	I0603 12:27:33.860478 1086826 logs.go:276] 0 containers: []
	W0603 12:27:33.860490 1086826 logs.go:278] No container was found matching "kindnet"
	I0603 12:27:33.860503 1086826 logs.go:123] Gathering logs for kubelet ...
	I0603 12:27:33.860523 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0603 12:27:33.912867 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:33.913116 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:33.946641 1086826 logs.go:123] Gathering logs for kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] ...
	I0603 12:27:33.946688 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:33.995447 1086826 logs.go:123] Gathering logs for etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] ...
	I0603 12:27:33.995490 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:34.053247 1086826 logs.go:123] Gathering logs for kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] ...
	I0603 12:27:34.053293 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:34.092640 1086826 logs.go:123] Gathering logs for kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] ...
	I0603 12:27:34.092671 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:34.161946 1086826 logs.go:123] Gathering logs for CRI-O ...
	I0603 12:27:34.161991 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 12:27:35.152385 1086826 logs.go:123] Gathering logs for container status ...
	I0603 12:27:35.152441 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 12:27:35.199230 1086826 logs.go:123] Gathering logs for dmesg ...
	I0603 12:27:35.199272 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 12:27:35.215226 1086826 logs.go:123] Gathering logs for describe nodes ...
	I0603 12:27:35.215263 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 12:27:35.337967 1086826 logs.go:123] Gathering logs for coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] ...
	I0603 12:27:35.338010 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:35.380440 1086826 logs.go:123] Gathering logs for kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] ...
	I0603 12:27:35.380475 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:35.427393 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:35.427426 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0603 12:27:35.427490 1086826 out.go:239] X Problems detected in kubelet:
	W0603 12:27:35.427499 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897    1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	W0603 12:27:35.427506 1086826 out.go:239]   Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036    1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
	I0603 12:27:35.427513 1086826 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:35.427520 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:45.442234 1086826 system_pods.go:59] 18 kube-system pods found
	I0603 12:27:45.442278 1086826 system_pods.go:61] "coredns-7db6d8ff4d-hmhdl" [c3cfe166-99f3-4ac9-9905-8be76bcb511d] Running
	I0603 12:27:45.442285 1086826 system_pods.go:61] "csi-hostpath-attacher-0" [c6efaa50-400a-4e2d-9610-290b08ca0e27] Running
	I0603 12:27:45.442291 1086826 system_pods.go:61] "csi-hostpath-resizer-0" [d102455a-acf1-4067-b512-3e7d24676733] Running
	I0603 12:27:45.442296 1086826 system_pods.go:61] "csi-hostpathplugin-ldcdv" [db932b0d-726d-4b8d-b47c-dcbc1657a70d] Running
	I0603 12:27:45.442300 1086826 system_pods.go:61] "etcd-addons-699562" [90cdaf3f-ae75-439a-84f1-78cba28a6085] Running
	I0603 12:27:45.442305 1086826 system_pods.go:61] "kube-apiserver-addons-699562" [08e077e7-849e-40e9-bbb8-d3d5857a87bb] Running
	I0603 12:27:45.442309 1086826 system_pods.go:61] "kube-controller-manager-addons-699562" [1fb0b7db-5179-43de-bdea-0a9c8666d1dd] Running
	I0603 12:27:45.442313 1086826 system_pods.go:61] "kube-ingress-dns-minikube" [21a1c096-2479-4d10-864a-8b202b08a284] Running
	I0603 12:27:45.442318 1086826 system_pods.go:61] "kube-proxy-6ssr8" [609d1553-86b5-46ea-b503-bdfd9f291571] Running
	I0603 12:27:45.442323 1086826 system_pods.go:61] "kube-scheduler-addons-699562" [d5748ac9-a1c8-496a-aa0f-8a75c6a8b12c] Running
	I0603 12:27:45.442327 1086826 system_pods.go:61] "metrics-server-c59844bb4-pl8qk" [26f4580a-9514-47c0-aa22-11c454eaca32] Running
	I0603 12:27:45.442332 1086826 system_pods.go:61] "nvidia-device-plugin-daemonset-2sw5z" [3ad1866a-b3d5-4783-b2dd-557082180d8f] Running
	I0603 12:27:45.442337 1086826 system_pods.go:61] "registry-jrrh7" [af432feb-b699-477a-8cd5-ff109071d13d] Running
	I0603 12:27:45.442342 1086826 system_pods.go:61] "registry-proxy-n8265" [343bbd2c-1a4b-4796-8401-ebd3686c0a61] Running
	I0603 12:27:45.442348 1086826 system_pods.go:61] "snapshot-controller-745499f584-dk5sk" [e74e33d1-7eaf-46d7-bcb2-2a088a1687bd] Running
	I0603 12:27:45.442356 1086826 system_pods.go:61] "snapshot-controller-745499f584-nkg59" [dd8cffdf-f15c-405a-95d3-fa13eb7a4908] Running
	I0603 12:27:45.442361 1086826 system_pods.go:61] "storage-provisioner" [c3d92bc5-3f10-47e3-84a9-f532f14deae4] Running
	I0603 12:27:45.442370 1086826 system_pods.go:61] "tiller-deploy-6677d64bcd-k4tt8" [0ecadef4-5251-4d11-a39c-77a196200334] Running
	I0603 12:27:45.442378 1086826 system_pods.go:74] duration metric: took 11.901678581s to wait for pod list to return data ...
	I0603 12:27:45.442391 1086826 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:27:45.444510 1086826 default_sa.go:45] found service account: "default"
	I0603 12:27:45.444530 1086826 default_sa.go:55] duration metric: took 2.131961ms for default service account to be created ...
	I0603 12:27:45.444537 1086826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:27:45.453736 1086826 system_pods.go:86] 18 kube-system pods found
	I0603 12:27:45.453760 1086826 system_pods.go:89] "coredns-7db6d8ff4d-hmhdl" [c3cfe166-99f3-4ac9-9905-8be76bcb511d] Running
	I0603 12:27:45.453766 1086826 system_pods.go:89] "csi-hostpath-attacher-0" [c6efaa50-400a-4e2d-9610-290b08ca0e27] Running
	I0603 12:27:45.453770 1086826 system_pods.go:89] "csi-hostpath-resizer-0" [d102455a-acf1-4067-b512-3e7d24676733] Running
	I0603 12:27:45.453774 1086826 system_pods.go:89] "csi-hostpathplugin-ldcdv" [db932b0d-726d-4b8d-b47c-dcbc1657a70d] Running
	I0603 12:27:45.453778 1086826 system_pods.go:89] "etcd-addons-699562" [90cdaf3f-ae75-439a-84f1-78cba28a6085] Running
	I0603 12:27:45.453782 1086826 system_pods.go:89] "kube-apiserver-addons-699562" [08e077e7-849e-40e9-bbb8-d3d5857a87bb] Running
	I0603 12:27:45.453786 1086826 system_pods.go:89] "kube-controller-manager-addons-699562" [1fb0b7db-5179-43de-bdea-0a9c8666d1dd] Running
	I0603 12:27:45.453791 1086826 system_pods.go:89] "kube-ingress-dns-minikube" [21a1c096-2479-4d10-864a-8b202b08a284] Running
	I0603 12:27:45.453795 1086826 system_pods.go:89] "kube-proxy-6ssr8" [609d1553-86b5-46ea-b503-bdfd9f291571] Running
	I0603 12:27:45.453799 1086826 system_pods.go:89] "kube-scheduler-addons-699562" [d5748ac9-a1c8-496a-aa0f-8a75c6a8b12c] Running
	I0603 12:27:45.453805 1086826 system_pods.go:89] "metrics-server-c59844bb4-pl8qk" [26f4580a-9514-47c0-aa22-11c454eaca32] Running
	I0603 12:27:45.453809 1086826 system_pods.go:89] "nvidia-device-plugin-daemonset-2sw5z" [3ad1866a-b3d5-4783-b2dd-557082180d8f] Running
	I0603 12:27:45.453814 1086826 system_pods.go:89] "registry-jrrh7" [af432feb-b699-477a-8cd5-ff109071d13d] Running
	I0603 12:27:45.453818 1086826 system_pods.go:89] "registry-proxy-n8265" [343bbd2c-1a4b-4796-8401-ebd3686c0a61] Running
	I0603 12:27:45.453821 1086826 system_pods.go:89] "snapshot-controller-745499f584-dk5sk" [e74e33d1-7eaf-46d7-bcb2-2a088a1687bd] Running
	I0603 12:27:45.453828 1086826 system_pods.go:89] "snapshot-controller-745499f584-nkg59" [dd8cffdf-f15c-405a-95d3-fa13eb7a4908] Running
	I0603 12:27:45.453832 1086826 system_pods.go:89] "storage-provisioner" [c3d92bc5-3f10-47e3-84a9-f532f14deae4] Running
	I0603 12:27:45.453835 1086826 system_pods.go:89] "tiller-deploy-6677d64bcd-k4tt8" [0ecadef4-5251-4d11-a39c-77a196200334] Running
	I0603 12:27:45.453842 1086826 system_pods.go:126] duration metric: took 9.30001ms to wait for k8s-apps to be running ...
	I0603 12:27:45.453849 1086826 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:27:45.453893 1086826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:27:45.472770 1086826 system_svc.go:56] duration metric: took 18.912332ms WaitForService to wait for kubelet
	I0603 12:27:45.472793 1086826 kubeadm.go:576] duration metric: took 2m22.433473354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:27:45.472813 1086826 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:27:45.476327 1086826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:27:45.476351 1086826 node_conditions.go:123] node cpu capacity is 2
	I0603 12:27:45.476365 1086826 node_conditions.go:105] duration metric: took 3.54603ms to run NodePressure ...
	I0603 12:27:45.476377 1086826 start.go:240] waiting for startup goroutines ...
	I0603 12:27:45.476384 1086826 start.go:245] waiting for cluster config update ...
	I0603 12:27:45.476401 1086826 start.go:254] writing updated cluster config ...
	I0603 12:27:45.476702 1086826 ssh_runner.go:195] Run: rm -f paused
	I0603 12:27:45.526801 1086826 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:27:45.529608 1086826 out.go:177] * Done! kubectl is now configured to use "addons-699562" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.794679313Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&PodSandboxMetadata{Name:hello-world-app-86c47465fc-79c22,Uid:084158b3-1687-4f4c-b741-cbab7ca11858,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417822757287870,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,pod-template-hash: 86c47465fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:30:22.442780902Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&PodSandboxMetadata{Name:nginx,Uid:22eac9e0-47f1-46a1-9745-87ca515de64e,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1717417680394820409,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:28:00.073978393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&PodSandboxMetadata{Name:headlamp-68456f997b-tpgtj,Uid:c02f3cb7-dd75-4d83-89fe-082ca6c80805,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417666970222723,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,pod-template-hash: 68456f997b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
06-03T12:27:46.655997091Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-vq6sn,Uid:d4773645-1a91-48bc-a27e-61822e3eb944,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417600982240406,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:25:35.514569952Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-5ddbf7d777-th7qj,Uid:cb66a0b3-53cb-493e-8010-d545cc1dc5b8,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1717417533513409121,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,pod-template-hash: 5ddbf7d777,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:25:29.332253164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-8d985888d-2trqm,Uid:1f4740f5-01f4-413e-8b79-311c67526d69,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417530208265447,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.p
od.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,pod-template-hash: 8d985888d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:25:29.373204258Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-pl8qk,Uid:26f4580a-9514-47c0-aa22-11c454eaca32,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417529566249171,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:25:28.882314292Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&PodSandboxMetadata{Name:storage-prov
isioner,Uid:c3d92bc5-3f10-47e3-84a9-f532f14deae4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417528893070141,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"
serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T12:25:28.280109133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-hmhdl,Uid:c3cfe166-99f3-4ac9-9905-8be76bcb511d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417523126060581,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:25:22.802339483Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&PodSandboxMetadata{Na
me:kube-proxy-6ssr8,Uid:609d1553-86b5-46ea-b503-bdfd9f291571,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417522887843367,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T12:25:22.577038329Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b594b2f837fe04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-699562,Uid:c2e264d67def89fa6266f980f6f77444,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417503068073556,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: c2e264d67def89fa6266f980f6f77444,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c2e264d67def89fa6266f980f6f77444,kubernetes.io/config.seen: 2024-06-03T12:25:02.387225383Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-699562,Uid:6c2e93774694bec0d9f39543e1c101b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417503059535365,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.241:8443,kubernetes.io/config.hash: 6c2e93774694bec0d9f39543e1c101b0,kubernetes.io/config.seen: 2024-06-03T12:25:02.387223251
Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dfa5c4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-699562,Uid:65e7e1f6b5fb520ef619dd246fd97035,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717417503045857012,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 65e7e1f6b5fb520ef619dd246fd97035,kubernetes.io/config.seen: 2024-06-03T12:25:02.387224406Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&PodSandboxMetadata{Name:etcd-addons-699562,Uid:628fee2574e8e2e94faacdc70733c8af,Namespace:kube-system,Attempt:0,},State:SANDBOX_
READY,CreatedAt:1717417503040987799,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.241:2379,kubernetes.io/config.hash: 628fee2574e8e2e94faacdc70733c8af,kubernetes.io/config.seen: 2024-06-03T12:25:02.387219511Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=30df3fb3-8193-4dcb-8425-f8a4529e7b6f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.795415686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d45ddd4-8b1d-42ab-a636-c08800ab362f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.795473110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d45ddd4-8b1d-42ab-a636-c08800ab362f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.795844921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17174
17579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad
d0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a
11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c
4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandb
oxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837f
e04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d45ddd4-8b1d-42ab-a636-c08800ab362f name=/runtime.v1.RuntimeService/Lis
tContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.800835962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82f7cfdd-8a75-48b8-9ff2-8df0c863ec69 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.800949214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82f7cfdd-8a75-48b8-9ff2-8df0c863ec69 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.802289884Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd111e94-b168-474b-bdd8-b6612c2c6c49 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.803546073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418004803522303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd111e94-b168-474b-bdd8-b6612c2c6c49 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.804794636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90ce57e4-a3a5-4041-a625-52d2a3cc9be6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.804860780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90ce57e4-a3a5-4041-a625-52d2a3cc9be6 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.805142890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17174
17579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad
d0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a
11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c
4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandb
oxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837f
e04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90ce57e4-a3a5-4041-a625-52d2a3cc9be6 name=/runtime.v1.RuntimeService/Lis
tContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.838273569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cdff43de-8a59-45e7-be6b-e93472d165d2 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.838342820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cdff43de-8a59-45e7-be6b-e93472d165d2 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.839553234Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97eb2204-bb16-4afe-93d5-aac3485bd24f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.841098039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418004841073038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97eb2204-bb16-4afe-93d5-aac3485bd24f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.841813827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdc42005-05b8-4cff-a265-8986cb4795fb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.841869031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdc42005-05b8-4cff-a265-8986cb4795fb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.842420022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17174
17579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad
d0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a
11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c
4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandb
oxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837f
e04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdc42005-05b8-4cff-a265-8986cb4795fb name=/runtime.v1.RuntimeService/Lis
tContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.889404664Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=169505b7-fb4b-4301-84be-60ac4e3249a4 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.889532134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=169505b7-fb4b-4301-84be-60ac4e3249a4 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.890754118Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26ce23ca-6f1f-4761-a556-9e174e5f3aa9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.892243098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418004892213761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26ce23ca-6f1f-4761-a556-9e174e5f3aa9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.893091538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a4dcd7e-b151-482e-b347-b9c9d4d2c756 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.893165180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a4dcd7e-b151-482e-b347-b9c9d4d2c756 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:33:24 addons-699562 crio[679]: time="2024-06-03 12:33:24.893470570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17174
17579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad
d0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a
11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c
4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandb
oxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837f
e04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a4dcd7e-b151-482e-b347-b9c9d4d2c756 name=/runtime.v1.RuntimeService/Lis
tContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8d642bc327116       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 2 minutes ago       Running             hello-world-app           0                   2cd7a7a28e0a5       hello-world-app-86c47465fc-79c22
	a173411215156       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                         5 minutes ago       Running             nginx                     0                   f58876a06d48d       nginx
	9bf5932194780       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                   5 minutes ago       Running             headlamp                  0                   83a0e5827ce1a       headlamp-68456f997b-tpgtj
	8f787a95dc6ea       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   266b9c9ff3c4b       gcp-auth-5db96cd9b4-vq6sn
	08062fd585905       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   3c836ea529a74       yakd-dashboard-5ddbf7d777-th7qj
	ff24eb8563b0c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   b76bfaf676bbe       local-path-provisioner-8d985888d-2trqm
	071b33296d63e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Exited              metrics-server            0                   c808b7e546b60       metrics-server-c59844bb4-pl8qk
	17a9104d81026       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   81961c6a37d61       storage-provisioner
	35f4eaf8d81f1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   0bfe8f4160274       coredns-7db6d8ff4d-hmhdl
	6add0233edc94       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                        8 minutes ago       Running             kube-proxy                0                   bdb166637cc76       kube-proxy-6ssr8
	0c7a1cc6df31c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   b7cc010079add       etcd-addons-699562
	5dacc96e3a0d6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                        8 minutes ago       Running             kube-controller-manager   0                   dfa5c4cb4bc79       kube-controller-manager-addons-699562
	ff21db0353955       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                        8 minutes ago       Running             kube-apiserver            0                   96186e4c50e5e       kube-apiserver-addons-699562
	92e20bf314646       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                        8 minutes ago       Running             kube-scheduler            0                   0b594b2f837fe       kube-scheduler-addons-699562
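
A note on the CRI-O debug entries above: the repeated /runtime.v1.RuntimeService/ListContainers requests and the container table just printed come from the same CRI endpoint. The following Go sketch issues that RPC directly against the CRI-O socket shown in the node's cri-socket annotation further down; it is illustrative only and not part of the test harness, and the grpc/cri-api import paths are assumptions (CRI-O 1.29 serves CRI v1).

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the same unix socket the kubelet uses (see the
        // kubeadm.alpha.kubernetes.io/cri-socket annotation below).
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimev1.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter is the "No filters were applied, returning full
        // container list" case seen in the debug log above.
        resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
        }
    }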
	
	
	==> coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] <==
	[INFO] 10.244.0.8:53029 - 52615 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000329373s
	[INFO] 10.244.0.8:53749 - 1624 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062672s
	[INFO] 10.244.0.8:53749 - 24926 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00015104s
	[INFO] 10.244.0.8:58411 - 17668 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000212275s
	[INFO] 10.244.0.8:58411 - 11274 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000595s
	[INFO] 10.244.0.8:59239 - 53735 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00017039s
	[INFO] 10.244.0.8:59239 - 37605 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000201028s
	[INFO] 10.244.0.8:52190 - 44357 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000260199s
	[INFO] 10.244.0.8:52190 - 54344 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000203243s
	[INFO] 10.244.0.8:40017 - 29233 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095616s
	[INFO] 10.244.0.8:40017 - 50748 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00023833s
	[INFO] 10.244.0.8:40407 - 532 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075947s
	[INFO] 10.244.0.8:40407 - 24106 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107276s
	[INFO] 10.244.0.8:55786 - 11074 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068658s
	[INFO] 10.244.0.8:55786 - 40000 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154175s
	[INFO] 10.244.0.22:48426 - 37810 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000911895s
	[INFO] 10.244.0.22:55143 - 37903 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000924721s
	[INFO] 10.244.0.22:54175 - 35195 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155679s
	[INFO] 10.244.0.22:46392 - 19652 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056593s
	[INFO] 10.244.0.22:44105 - 37037 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000209248s
	[INFO] 10.244.0.22:58175 - 33620 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073278s
	[INFO] 10.244.0.22:48829 - 15494 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001483557s
	[INFO] 10.244.0.22:45600 - 59491 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00177612s
	[INFO] 10.244.0.27:52018 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000487785s
	[INFO] 10.244.0.27:44227 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146213s
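
The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-list expansion: with the kubelet's default ndots:5, each search domain is appended to the queried name first and the bare name is tried last, which is exactly the sequence CoreDNS logs for registry.kube-system.svc.cluster.local. A minimal sketch of that candidate ordering (the search list and ndots value are assumptions matching a kubelet-generated resolv.conf for a pod in kube-system):

    package main

    import (
        "fmt"
        "strings"
    )

    // candidates reproduces the order a stub resolver tries names in:
    // below the ndots threshold, search domains are appended first and the
    // bare name comes last; a trailing dot disables expansion entirely.
    func candidates(name string, search []string, ndots int) []string {
        if strings.HasSuffix(name, ".") {
            return []string{name}
        }
        var out []string
        if strings.Count(name, ".") < ndots {
            for _, s := range search {
                out = append(out, name+"."+s)
            }
            return append(out, name)
        }
        out = append(out, name)
        for _, s := range search {
            out = append(out, name+"."+s)
        }
        return out
    }

    func main() {
        search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
        for _, c := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
            fmt.Println(c) // three NXDOMAIN candidates, then the NOERROR name
        }
    }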
	
	
	==> describe nodes <==
	Name:               addons-699562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-699562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=addons-699562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_25_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-699562
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:25:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-699562
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:33:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:30:46 +0000   Mon, 03 Jun 2024 12:25:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:30:46 +0000   Mon, 03 Jun 2024 12:25:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:30:46 +0000   Mon, 03 Jun 2024 12:25:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:30:46 +0000   Mon, 03 Jun 2024 12:25:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    addons-699562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 84ef91a2c9524e6487a854dd506d694c
	  System UUID:                84ef91a2-c952-4e64-87a8-54dd506d694c
	  Boot ID:                    af6edd86-d456-43e7-97d1-dac4dba15c8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-79c22          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  gcp-auth                    gcp-auth-5db96cd9b4-vq6sn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  headlamp                    headlamp-68456f997b-tpgtj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 coredns-7db6d8ff4d-hmhdl                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m3s
	  kube-system                 etcd-addons-699562                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m17s
	  kube-system                 kube-apiserver-addons-699562              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-controller-manager-addons-699562     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-proxy-6ssr8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-addons-699562              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  local-path-storage          local-path-provisioner-8d985888d-2trqm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-th7qj           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m23s (x8 over 8m23s)  kubelet          Node addons-699562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s (x8 over 8m23s)  kubelet          Node addons-699562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s (x7 over 8m23s)  kubelet          Node addons-699562 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m17s                  kubelet          Node addons-699562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m17s                  kubelet          Node addons-699562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m17s                  kubelet          Node addons-699562 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m16s                  kubelet          Node addons-699562 status is now: NodeReady
	  Normal  RegisteredNode           8m4s                   node-controller  Node addons-699562 event: Registered Node addons-699562 in Controller
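
The percentages in the Allocated resources block above are just summed requests over node allocatable (750m CPU of 2 cores, 298Mi of 3912780Ki memory). A quick sketch that re-derives the same numbers with apimachinery quantities; it only reproduces the arithmetic shown above and is not taken from the test suite:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    // pct returns the integer percentage of requested over allocatable,
    // truncating as in the describe output above.
    func pct(requested, allocatable string) int64 {
        req := resource.MustParse(requested)
        alloc := resource.MustParse(allocatable)
        // MilliValue keeps precision for sub-core CPU quantities like "750m".
        return req.MilliValue() * 100 / alloc.MilliValue()
    }

    func main() {
        fmt.Println("cpu:   ", pct("750m", "2"), "%")          // 37 %
        fmt.Println("memory:", pct("298Mi", "3912780Ki"), "%") // 7 %
    }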
	
	
	==> dmesg <==
	[  +4.797363] kauditd_printk_skb: 96 callbacks suppressed
	[  +5.070667] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.543858] kauditd_printk_skb: 115 callbacks suppressed
	[  +8.740436] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.913062] kauditd_printk_skb: 2 callbacks suppressed
	[Jun 3 12:26] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.135899] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.704931] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.044616] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.006333] kauditd_printk_skb: 85 callbacks suppressed
	[  +9.474385] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.925812] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.811764] kauditd_printk_skb: 24 callbacks suppressed
	[Jun 3 12:27] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.419790] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.060102] kauditd_printk_skb: 35 callbacks suppressed
	[Jun 3 12:28] kauditd_printk_skb: 82 callbacks suppressed
	[  +6.441502] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.709911] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.585647] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.428593] kauditd_printk_skb: 3 callbacks suppressed
	[ +15.030336] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.159254] kauditd_printk_skb: 33 callbacks suppressed
	[Jun 3 12:30] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.068631] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] <==
	{"level":"info","ts":"2024-06-03T12:26:40.969184Z","caller":"traceutil/trace.go:171","msg":"trace[875911949] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-pl8qk; range_end:; response_count:1; response_revision:1164; }","duration":"172.808766ms","start":"2024-06-03T12:26:40.796368Z","end":"2024-06-03T12:26:40.969176Z","steps":["trace[875911949] 'agreement among raft nodes before linearized reading'  (duration: 172.731177ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:40.969083Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.658256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-03T12:26:40.969383Z","caller":"traceutil/trace.go:171","msg":"trace[1638708360] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1164; }","duration":"168.010019ms","start":"2024-06-03T12:26:40.801363Z","end":"2024-06-03T12:26:40.969373Z","steps":["trace[1638708360] 'agreement among raft nodes before linearized reading'  (duration: 167.606225ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:40.969317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.010904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-03T12:26:40.969599Z","caller":"traceutil/trace.go:171","msg":"trace[494188914] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1164; }","duration":"363.307211ms","start":"2024-06-03T12:26:40.606284Z","end":"2024-06-03T12:26:40.969591Z","steps":["trace[494188914] 'agreement among raft nodes before linearized reading'  (duration: 362.985664ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:40.969676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:40.60627Z","time spent":"363.34474ms","remote":"127.0.0.1:33146","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11475,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-06-03T12:26:44.2914Z","caller":"traceutil/trace.go:171","msg":"trace[27234936] linearizableReadLoop","detail":"{readStateIndex:1214; appliedIndex:1213; }","duration":"329.42933ms","start":"2024-06-03T12:26:43.961874Z","end":"2024-06-03T12:26:44.291304Z","steps":["trace[27234936] 'read index received'  (duration: 329.000447ms)","trace[27234936] 'applied index is now lower than readState.Index'  (duration: 428.24µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T12:26:44.291596Z","caller":"traceutil/trace.go:171","msg":"trace[1309044197] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"436.33847ms","start":"2024-06-03T12:26:43.85524Z","end":"2024-06-03T12:26:44.291579Z","steps":["trace[1309044197] 'process raft request'  (duration: 435.849089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:44.29339Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:43.855226Z","time spent":"438.045751ms","remote":"127.0.0.1:33140","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1166 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-06-03T12:26:44.293718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.692606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-03T12:26:44.293762Z","caller":"traceutil/trace.go:171","msg":"trace[49774894] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1181; }","duration":"187.765631ms","start":"2024-06-03T12:26:44.10599Z","end":"2024-06-03T12:26:44.293756Z","steps":["trace[49774894] 'agreement among raft nodes before linearized reading'  (duration: 187.507223ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:26:44.293953Z","caller":"traceutil/trace.go:171","msg":"trace[1996245162] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"289.652518ms","start":"2024-06-03T12:26:44.004294Z","end":"2024-06-03T12:26:44.293946Z","steps":["trace[1996245162] 'process raft request'  (duration: 289.152911ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:44.291718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.823247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T12:26:44.294214Z","caller":"traceutil/trace.go:171","msg":"trace[1501958648] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1180; }","duration":"332.361109ms","start":"2024-06-03T12:26:43.961847Z","end":"2024-06-03T12:26:44.294208Z","steps":["trace[1501958648] 'agreement among raft nodes before linearized reading'  (duration: 329.829315ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:26:44.294236Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:43.961834Z","time spent":"332.394927ms","remote":"127.0.0.1:33232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":27,"request content":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" "}
	{"level":"info","ts":"2024-06-03T12:28:03.350249Z","caller":"traceutil/trace.go:171","msg":"trace[112141829] linearizableReadLoop","detail":"{readStateIndex:1550; appliedIndex:1549; }","duration":"344.411339ms","start":"2024-06-03T12:28:03.005804Z","end":"2024-06-03T12:28:03.350215Z","steps":["trace[112141829] 'read index received'  (duration: 344.171196ms)","trace[112141829] 'applied index is now lower than readState.Index'  (duration: 239.674µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T12:28:03.350549Z","caller":"traceutil/trace.go:171","msg":"trace[525061229] transaction","detail":"{read_only:false; response_revision:1493; number_of_response:1; }","duration":"407.785729ms","start":"2024-06-03T12:28:02.942753Z","end":"2024-06-03T12:28:03.350539Z","steps":["trace[525061229] 'process raft request'  (duration: 407.359787ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:28:03.350779Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:28:02.942735Z","time spent":"407.849445ms","remote":"127.0.0.1:33146","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4125,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/registry-proxy-n8265\" mod_revision:1490 > success:<request_put:<key:\"/registry/pods/kube-system/registry-proxy-n8265\" value_size:4070 >> failure:<request_range:<key:\"/registry/pods/kube-system/registry-proxy-n8265\" > >"}
	{"level":"warn","ts":"2024-06-03T12:28:03.350953Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.148304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-06-03T12:28:03.350975Z","caller":"traceutil/trace.go:171","msg":"trace[166667215] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1493; }","duration":"345.228477ms","start":"2024-06-03T12:28:03.00574Z","end":"2024-06-03T12:28:03.350969Z","steps":["trace[166667215] 'agreement among raft nodes before linearized reading'  (duration: 345.150589ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:28:03.35099Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:28:03.005727Z","time spent":"345.260644ms","remote":"127.0.0.1:33232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"warn","ts":"2024-06-03T12:28:03.351095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.791048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6051"}
	{"level":"info","ts":"2024-06-03T12:28:03.351136Z","caller":"traceutil/trace.go:171","msg":"trace[2033513505] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1493; }","duration":"223.850857ms","start":"2024-06-03T12:28:03.12728Z","end":"2024-06-03T12:28:03.35113Z","steps":["trace[2033513505] 'agreement among raft nodes before linearized reading'  (duration: 223.776833ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:28:19.240319Z","caller":"traceutil/trace.go:171","msg":"trace[1745785475] transaction","detail":"{read_only:false; response_revision:1593; number_of_response:1; }","duration":"377.328726ms","start":"2024-06-03T12:28:18.862975Z","end":"2024-06-03T12:28:19.240304Z","steps":["trace[1745785475] 'process raft request'  (duration: 377.221796ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:28:19.240445Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:28:18.862954Z","time spent":"377.437463ms","remote":"127.0.0.1:33140","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1589 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4] <==
	2024/06/03 12:26:44 GCP Auth Webhook started!
	2024/06/03 12:27:45 Ready to marshal response ...
	2024/06/03 12:27:45 Ready to write response ...
	2024/06/03 12:27:45 Ready to marshal response ...
	2024/06/03 12:27:45 Ready to write response ...
	2024/06/03 12:27:46 Ready to marshal response ...
	2024/06/03 12:27:46 Ready to write response ...
	2024/06/03 12:27:46 Ready to marshal response ...
	2024/06/03 12:27:46 Ready to write response ...
	2024/06/03 12:27:46 Ready to marshal response ...
	2024/06/03 12:27:46 Ready to write response ...
	2024/06/03 12:27:51 Ready to marshal response ...
	2024/06/03 12:27:51 Ready to write response ...
	2024/06/03 12:27:57 Ready to marshal response ...
	2024/06/03 12:27:57 Ready to write response ...
	2024/06/03 12:27:58 Ready to marshal response ...
	2024/06/03 12:27:58 Ready to write response ...
	2024/06/03 12:28:00 Ready to marshal response ...
	2024/06/03 12:28:00 Ready to write response ...
	2024/06/03 12:28:13 Ready to marshal response ...
	2024/06/03 12:28:13 Ready to write response ...
	2024/06/03 12:28:35 Ready to marshal response ...
	2024/06/03 12:28:35 Ready to write response ...
	2024/06/03 12:30:22 Ready to marshal response ...
	2024/06/03 12:30:22 Ready to write response ...
	
	
	==> kernel <==
	 12:33:25 up 8 min,  0 users,  load average: 0.04, 0.65, 0.54
	Linux addons-699562 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] <==
	E0603 12:27:09.544611       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.164.223:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.164.223:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.164.223:443: connect: connection refused
	I0603 12:27:09.607112       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0603 12:27:46.582529       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.91.232"}
	I0603 12:27:59.941066       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0603 12:28:00.119538       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.213.114"}
	I0603 12:28:04.555445       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0603 12:28:05.573603       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	E0603 12:28:05.580561       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	W0603 12:28:05.588138       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0603 12:28:26.681238       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0603 12:28:50.936173       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:50.936287       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 12:28:50.965267       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:50.965333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 12:28:50.975534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:50.975583       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 12:28:50.987547       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:50.987604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0603 12:28:51.026275       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0603 12:28:51.029483       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0603 12:28:51.975974       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0603 12:28:52.027110       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0603 12:28:52.037480       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0603 12:30:22.605713       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.62.100"}
	E0603 12:30:25.751190       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
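
The metrics.k8s.io error at the top of this block is the apiserver's aggregation layer failing to reach metrics-server at 10.111.164.223:443. A hedged client-go sketch for checking whether the aggregated group is actually being served (the kubeconfig path is an assumption; the test run uses its own addons-699562 context):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed default kubeconfig location.
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // This discovery call is proxied through the aggregation layer, so it
        // fails in the same way the apiserver logs when metrics-server is down.
        list, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
        if err != nil {
            log.Fatalf("metrics.k8s.io/v1beta1 not served: %v", err)
        }
        for _, r := range list.APIResources {
            fmt.Println(r.Name) // expect at least "nodes" and "pods"
        }
    }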
	
	
	==> kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] <==
	W0603 12:31:34.622859       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:31:34.622886       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:31:37.105101       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:31:37.105249       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:31:38.550169       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:31:38.550263       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:32:12.349851       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:32:12.349938       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:32:13.824880       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:32:13.824935       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:32:14.429791       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:32:14.429894       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:32:28.395249       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:32:28.395459       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:32:43.577194       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:32:43.577222       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:32:48.812796       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:32:48.812842       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:33:07.814809       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:33:07.814909       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:33:13.449711       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:33:13.449918       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0603 12:33:15.262470       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0603 12:33:15.262735       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0603 12:33:23.778088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="9.1µs"
	
	
	==> kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] <==
	I0603 12:25:23.585524       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:25:23.608160       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
	I0603 12:25:23.755662       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:25:23.755712       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:25:23.755727       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:25:23.759852       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:25:23.760062       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:25:23.760076       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:25:23.764482       1 config.go:192] "Starting service config controller"
	I0603 12:25:23.764500       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:25:23.764539       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:25:23.764543       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:25:23.766909       1 config.go:319] "Starting node config controller"
	I0603 12:25:23.766944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:25:23.865029       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:25:23.865071       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:25:23.867400       1 shared_informer.go:320] Caches are synced for node config
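
The "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup handshake: start the informers, then block until each one's initial LIST has completed. A minimal sketch of that pattern (in-cluster config is an assumption, and kube-proxy's own config controllers are more specialized than this):

    package main

    import (
        "log"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes running inside a pod
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        stop := make(chan struct{})
        defer close(stop)

        factory := informers.NewSharedInformerFactory(client, 0)
        svc := factory.Core().V1().Services().Informer()
        eps := factory.Discovery().V1().EndpointSlices().Informer()

        factory.Start(stop) // analogous to "Starting service config controller"

        // Block until the initial LIST for each informer is done; this is the
        // point where the "Caches are synced" lines above get logged.
        if !cache.WaitForCacheSync(stop, svc.HasSynced, eps.HasSynced) {
            log.Fatal("caches did not sync")
        }
        log.Println("caches are synced; safe to start handling events")
    }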
	
	
	==> kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] <==
	W0603 12:25:05.882911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:25:05.883016       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:25:05.883736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:05.884966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:05.884245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:25:05.884352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:25:05.884720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:05.884860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:25:06.694024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:25:06.694053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:25:06.778297       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:25:06.778385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:25:06.850846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:25:06.850894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:25:06.890880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:06.891766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:25:06.929739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:25:06.929827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:25:06.932321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:25:06.932367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:25:07.026054       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:25:07.026211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:25:07.199563       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:25:07.199751       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:25:09.962396       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 12:30:30 addons-699562 kubelet[1268]: I0603 12:30:30.556191    1268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748f7279-00fd-4d10-aa57-2f4c60258fe2" path="/var/lib/kubelet/pods/748f7279-00fd-4d10-aa57-2f4c60258fe2/volumes"
	Jun 03 12:31:08 addons-699562 kubelet[1268]: E0603 12:31:08.570828    1268 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:31:08 addons-699562 kubelet[1268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:31:08 addons-699562 kubelet[1268]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:31:08 addons-699562 kubelet[1268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:31:08 addons-699562 kubelet[1268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:31:09 addons-699562 kubelet[1268]: I0603 12:31:09.037415    1268 scope.go:117] "RemoveContainer" containerID="3d13bd5e73c30ae4165f06b2185319aeacf35e7ffaa0b56363eb04137f5f6968"
	Jun 03 12:31:09 addons-699562 kubelet[1268]: I0603 12:31:09.059495    1268 scope.go:117] "RemoveContainer" containerID="78062314a87041d03dc6e5e7132662b0ff33b7d83c2f19e08843de0216f60c0f"
	Jun 03 12:31:09 addons-699562 kubelet[1268]: I0603 12:31:09.079614    1268 scope.go:117] "RemoveContainer" containerID="435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a"
	Jun 03 12:32:08 addons-699562 kubelet[1268]: E0603 12:32:08.569323    1268 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:32:08 addons-699562 kubelet[1268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:32:08 addons-699562 kubelet[1268]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:32:08 addons-699562 kubelet[1268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:32:08 addons-699562 kubelet[1268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:33:08 addons-699562 kubelet[1268]: E0603 12:33:08.568200    1268 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:33:08 addons-699562 kubelet[1268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:33:08 addons-699562 kubelet[1268]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:33:08 addons-699562 kubelet[1268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:33:08 addons-699562 kubelet[1268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:33:25 addons-699562 kubelet[1268]: I0603 12:33:25.125224    1268 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/26f4580a-9514-47c0-aa22-11c454eaca32-tmp-dir\") pod \"26f4580a-9514-47c0-aa22-11c454eaca32\" (UID: \"26f4580a-9514-47c0-aa22-11c454eaca32\") "
	Jun 03 12:33:25 addons-699562 kubelet[1268]: I0603 12:33:25.125415    1268 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfbpw\" (UniqueName: \"kubernetes.io/projected/26f4580a-9514-47c0-aa22-11c454eaca32-kube-api-access-zfbpw\") pod \"26f4580a-9514-47c0-aa22-11c454eaca32\" (UID: \"26f4580a-9514-47c0-aa22-11c454eaca32\") "
	Jun 03 12:33:25 addons-699562 kubelet[1268]: I0603 12:33:25.126111    1268 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26f4580a-9514-47c0-aa22-11c454eaca32-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "26f4580a-9514-47c0-aa22-11c454eaca32" (UID: "26f4580a-9514-47c0-aa22-11c454eaca32"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jun 03 12:33:25 addons-699562 kubelet[1268]: I0603 12:33:25.137364    1268 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f4580a-9514-47c0-aa22-11c454eaca32-kube-api-access-zfbpw" (OuterVolumeSpecName: "kube-api-access-zfbpw") pod "26f4580a-9514-47c0-aa22-11c454eaca32" (UID: "26f4580a-9514-47c0-aa22-11c454eaca32"). InnerVolumeSpecName "kube-api-access-zfbpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 03 12:33:25 addons-699562 kubelet[1268]: I0603 12:33:25.226797    1268 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zfbpw\" (UniqueName: \"kubernetes.io/projected/26f4580a-9514-47c0-aa22-11c454eaca32-kube-api-access-zfbpw\") on node \"addons-699562\" DevicePath \"\""
	Jun 03 12:33:25 addons-699562 kubelet[1268]: I0603 12:33:25.226826    1268 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/26f4580a-9514-47c0-aa22-11c454eaca32-tmp-dir\") on node \"addons-699562\" DevicePath \"\""
	
	
	==> storage-provisioner [17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06] <==
	I0603 12:25:34.289349       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 12:25:34.302711       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 12:25:34.302770       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 12:25:34.321137       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 12:25:34.321265       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-699562_54717765-0edd-48a4-aaa9-cc3e6be606f3!
	I0603 12:25:34.323378       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b63e30e-da74-48d8-b9d7-4d6f0eeb01ad", APIVersion:"v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-699562_54717765-0edd-48a4-aaa9-cc3e6be606f3 became leader
	I0603 12:25:34.422282       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-699562_54717765-0edd-48a4-aaa9-cc3e6be606f3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-699562 -n addons-699562
helpers_test.go:261: (dbg) Run:  kubectl --context addons-699562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (328.63s)
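The metadata-informer errors at the top of the captured log ("failed to list *v1.PartialObjectMetadata: the server could not find the requested resource") are typically the signature of the metrics.k8s.io aggregated API never becoming available before the addon was torn down. A minimal manual check, assuming the addons-699562 profile is still up and that the addon uses the usual k8s-app=metrics-server label (neither is asserted by this log):

    # Did the aggregated API ever report Available=True, and is the pod serving?
    kubectl --context addons-699562 get apiservice v1beta1.metrics.k8s.io -o wide
    kubectl --context addons-699562 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context addons-699562 -n kube-system logs -l k8s-app=metrics-server --tail=50
    # This keeps failing until the aggregated API is actually served:
    kubectl --context addons-699562 top nodes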

                                                
                                    
TestAddons/parallel/LocalPath (13.37s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-699562 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-699562 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5baf1b2b-68d4-4a28-b8e3-9dc144743f58] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5baf1b2b-68d4-4a28-b8e3-9dc144743f58] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5baf1b2b-68d4-4a28-b8e3-9dc144743f58] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003199406s
addons_test.go:992: (dbg) Run:  kubectl --context addons-699562 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 ssh "cat /opt/local-path-provisioner/pvc-322948b5-f737-472a-a023-d147f813616b_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-699562 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-699562 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-699562 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (500.464085ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:27:58.458351 1088227 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:27:58.458632 1088227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:58.458642 1088227 out.go:304] Setting ErrFile to fd 2...
	I0603 12:27:58.458646 1088227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:27:58.458855 1088227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:27:58.459129 1088227 mustload.go:65] Loading cluster: addons-699562
	I0603 12:27:58.459436 1088227 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:27:58.459466 1088227 addons.go:602] checking whether the cluster is paused
	I0603 12:27:58.459565 1088227 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:27:58.459579 1088227 host.go:66] Checking if "addons-699562" exists ...
	I0603 12:27:58.459926 1088227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:27:58.459981 1088227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:27:58.475068 1088227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40885
	I0603 12:27:58.475521 1088227 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:27:58.476176 1088227 main.go:141] libmachine: Using API Version  1
	I0603 12:27:58.476200 1088227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:27:58.476579 1088227 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:27:58.476812 1088227 main.go:141] libmachine: (addons-699562) Calling .GetState
	I0603 12:27:58.478448 1088227 main.go:141] libmachine: (addons-699562) Calling .DriverName
	I0603 12:27:58.478658 1088227 ssh_runner.go:195] Run: systemctl --version
	I0603 12:27:58.478682 1088227 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
	I0603 12:27:58.480955 1088227 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:27:58.481344 1088227 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
	I0603 12:27:58.481375 1088227 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
	I0603 12:27:58.481520 1088227 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
	I0603 12:27:58.481702 1088227 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
	I0603 12:27:58.481878 1088227 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
	I0603 12:27:58.482019 1088227 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
	I0603 12:27:58.581793 1088227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:27:58.581898 1088227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:27:58.652364 1088227 cri.go:89] found id: "659433e2cc33050cec546428601f88c30011f342c2d2278b4bb8dcaa5a692d26"
	I0603 12:27:58.652396 1088227 cri.go:89] found id: "a5c0b1a054df4bf5a25a426dcc75cb2256008ab8d36529386065a4a2f44336fc"
	I0603 12:27:58.652402 1088227 cri.go:89] found id: "3fd0d5b4335cd0d0b2cb32381afea409e4953361ede3481c618eb2270500e5a7"
	I0603 12:27:58.652407 1088227 cri.go:89] found id: "fce7a28722cad5a74608438abe46fa9c804224f28c7b1ee95fa8e0b3d04ae6a3"
	I0603 12:27:58.652411 1088227 cri.go:89] found id: "83df38b22611b47e4eb22229144a40ee0ac518c7182553260fdd3506913a715a"
	I0603 12:27:58.652417 1088227 cri.go:89] found id: "88083c200721740d29ba6f12f8d334781c3c680aa1267b421e27b7ecc2228735"
	I0603 12:27:58.652420 1088227 cri.go:89] found id: "6e1b69a132f80e141c423adbe3b465ea6524b2b0daa92fd8a19ee84181a6d570"
	I0603 12:27:58.652423 1088227 cri.go:89] found id: "33636c13794e2b05eef3abc510c5f831ccf0fc155b6b25c108bfdf1f3e4c77cd"
	I0603 12:27:58.652425 1088227 cri.go:89] found id: "d71670c695de5eb1d56ef15c8654d35bb321c4146f86e26435d2e02bdb7f8a08"
	I0603 12:27:58.652440 1088227 cri.go:89] found id: "2ba9fbfdbf3fc02681d4b082b8188b9785ac8915e853493f2c2acb9bb21e8bd7"
	I0603 12:27:58.652445 1088227 cri.go:89] found id: "93f3eaf3da901537d8d0c58ffa71ab5755ed34ae0079d7e76f02365901c0fa13"
	I0603 12:27:58.652449 1088227 cri.go:89] found id: "b0ebd889ee4f22e8cc7d15e22b0213582a0f402cc10660918f2fa705c5b8e6f5"
	I0603 12:27:58.652453 1088227 cri.go:89] found id: "071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa"
	I0603 12:27:58.652476 1088227 cri.go:89] found id: "7f1a9ae7d49b49e84bb4f52ab5458fdcd447d233b39b90cf7fdf20d4590734d0"
	I0603 12:27:58.652492 1088227 cri.go:89] found id: "f6fe81e7ca2fd0156b4ee75ecc0e1e8c74af378213364803f5a24bc855d2d61e"
	I0603 12:27:58.652496 1088227 cri.go:89] found id: "7d7bc0d2e0da8b3978562d567cb3ca7262a460f011a09f688ca5b581ddc1c075"
	I0603 12:27:58.652503 1088227 cri.go:89] found id: "435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a"
	I0603 12:27:58.652518 1088227 cri.go:89] found id: "17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06"
	I0603 12:27:58.652525 1088227 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
	I0603 12:27:58.652528 1088227 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
	I0603 12:27:58.652535 1088227 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
	I0603 12:27:58.652540 1088227 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
	I0603 12:27:58.652545 1088227 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
	I0603 12:27:58.652549 1088227 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
	I0603 12:27:58.652557 1088227 cri.go:89] found id: ""
	I0603 12:27:58.652620 1088227 ssh_runner.go:195] Run: sudo runc list -f json
	I0603 12:27:58.898162 1088227 main.go:141] libmachine: Making call to close driver server
	I0603 12:27:58.898188 1088227 main.go:141] libmachine: (addons-699562) Calling .Close
	I0603 12:27:58.898544 1088227 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
	I0603 12:27:58.898680 1088227 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:27:58.898713 1088227 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:27:58.901102 1088227 out.go:177] 
	W0603 12:27:58.902542 1088227 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T12:27:58Z" level=error msg="stat /run/runc/fa0cbff707dbf8a06cb9d6befcabd35358b7d135b537c0f755c18c74cc5c74f2: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-06-03T12:27:58Z" level=error msg="stat /run/runc/fa0cbff707dbf8a06cb9d6befcabd35358b7d135b537c0f755c18c74cc5c74f2: no such file or directory"
	
	W0603 12:27:58.902558 1088227 out.go:239] * 
	* 
	W0603 12:27:58.906580 1088227 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:27:58.908314 1088227 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:1023: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-699562 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (13.37s)
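The disable step failed inside minikube's "check paused" probe: it listed kube-system containers with crictl and then ran sudo runc list -f json, which tripped over a container state directory that had just disappeared (stat /run/runc/fa0c…: no such file or directory). A small sketch of re-running that probe by hand to see whether it was a transient race; the commands mirror the ones in the captured log and assume the addons-699562 profile is still reachable:

    # Re-run the paused-check probe minikube used (taken from the log above).
    minikube -p addons-699562 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    minikube -p addons-699562 ssh "sudo runc list -f json"
    # If runc now succeeds, the earlier failure was likely a race with a container
    # exiting mid-check, and retrying the disable is a reasonable next step:
    minikube -p addons-699562 addons disable storage-provisioner-rancher --alsologtostderr -v=1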

                                                
                                    
TestAddons/StoppedEnableDisable (154.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-699562
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-699562: exit status 82 (2m0.464561333s)

                                                
                                                
-- stdout --
	* Stopping node "addons-699562"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-699562" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-699562
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-699562: exit status 11 (21.491707831s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.241:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-699562" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-699562
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-699562: exit status 11 (6.143283067s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.241:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-699562" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-699562
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-699562: exit status 11 (6.143255046s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.241:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-699562" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.24s)
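All three addon commands failed for the same underlying reason: the earlier minikube stop hit GUEST_STOP_TIMEOUT after two minutes, and later SSH attempts got "no route to host", so minikube no longer knows the guest's real power state. A hedged way to inspect and, if necessary, force the libvirt domain down from the host; qemu:///system is assumed here as the kvm2 driver's usual connection URI and is not taken from this log:

    # Inspect the domain the kvm2 driver manages (connection URI is an assumption).
    virsh -c qemu:///system domstate addons-699562
    virsh -c qemu:///system list --all
    # If the guest ignored the ACPI shutdown, force it off and let minikube re-check:
    virsh -c qemu:///system destroy addons-699562
    minikube -p addons-699562 status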

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 node stop m02 -v=7 --alsologtostderr
E0603 12:45:18.711220 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 12:45:39.191518 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 12:46:20.152223 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.48146113s)

                                                
                                                
-- stdout --
	* Stopping node "ha-220492-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:45:12.518286 1100367 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:45:12.518428 1100367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:45:12.518440 1100367 out.go:304] Setting ErrFile to fd 2...
	I0603 12:45:12.518447 1100367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:45:12.518629 1100367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:45:12.518865 1100367 mustload.go:65] Loading cluster: ha-220492
	I0603 12:45:12.519265 1100367 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:45:12.519283 1100367 stop.go:39] StopHost: ha-220492-m02
	I0603 12:45:12.519603 1100367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:45:12.519656 1100367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:45:12.537534 1100367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I0603 12:45:12.538371 1100367 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:45:12.539203 1100367 main.go:141] libmachine: Using API Version  1
	I0603 12:45:12.539228 1100367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:45:12.539673 1100367 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:45:12.541741 1100367 out.go:177] * Stopping node "ha-220492-m02"  ...
	I0603 12:45:12.543292 1100367 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 12:45:12.543338 1100367 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:45:12.543590 1100367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 12:45:12.543622 1100367 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:45:12.546726 1100367 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:45:12.547357 1100367 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:45:12.547387 1100367 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:45:12.547565 1100367 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:45:12.547814 1100367 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:45:12.548102 1100367 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:45:12.548302 1100367 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:45:12.636284 1100367 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 12:45:12.691036 1100367 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 12:45:12.745279 1100367 main.go:141] libmachine: Stopping "ha-220492-m02"...
	I0603 12:45:12.745345 1100367 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:45:12.746904 1100367 main.go:141] libmachine: (ha-220492-m02) Calling .Stop
	I0603 12:45:12.750831 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 0/120
	I0603 12:45:13.752828 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 1/120
	I0603 12:45:14.754185 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 2/120
	I0603 12:45:15.755614 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 3/120
	I0603 12:45:16.757121 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 4/120
	I0603 12:45:17.758474 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 5/120
	I0603 12:45:18.759947 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 6/120
	I0603 12:45:19.761331 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 7/120
	I0603 12:45:20.762510 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 8/120
	I0603 12:45:21.764073 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 9/120
	I0603 12:45:22.766351 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 10/120
	I0603 12:45:23.767905 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 11/120
	I0603 12:45:24.769607 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 12/120
	I0603 12:45:25.771945 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 13/120
	I0603 12:45:26.773388 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 14/120
	I0603 12:45:27.775622 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 15/120
	I0603 12:45:28.777116 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 16/120
	I0603 12:45:29.778427 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 17/120
	I0603 12:45:30.779733 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 18/120
	I0603 12:45:31.781276 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 19/120
	I0603 12:45:32.782782 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 20/120
	I0603 12:45:33.784151 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 21/120
	I0603 12:45:34.785644 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 22/120
	I0603 12:45:35.787840 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 23/120
	I0603 12:45:36.789251 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 24/120
	I0603 12:45:37.791111 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 25/120
	I0603 12:45:38.792862 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 26/120
	I0603 12:45:39.794473 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 27/120
	I0603 12:45:40.796438 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 28/120
	I0603 12:45:41.798356 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 29/120
	I0603 12:45:42.800754 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 30/120
	I0603 12:45:43.802281 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 31/120
	I0603 12:45:44.804019 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 32/120
	I0603 12:45:45.805678 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 33/120
	I0603 12:45:46.808042 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 34/120
	I0603 12:45:47.809908 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 35/120
	I0603 12:45:48.812255 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 36/120
	I0603 12:45:49.813560 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 37/120
	I0603 12:45:50.815011 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 38/120
	I0603 12:45:51.816304 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 39/120
	I0603 12:45:52.818304 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 40/120
	I0603 12:45:53.820179 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 41/120
	I0603 12:45:54.821536 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 42/120
	I0603 12:45:55.822922 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 43/120
	I0603 12:45:56.824325 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 44/120
	I0603 12:45:57.826621 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 45/120
	I0603 12:45:58.828029 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 46/120
	I0603 12:45:59.829603 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 47/120
	I0603 12:46:00.831097 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 48/120
	I0603 12:46:01.832325 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 49/120
	I0603 12:46:02.834396 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 50/120
	I0603 12:46:03.836033 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 51/120
	I0603 12:46:04.837543 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 52/120
	I0603 12:46:05.839275 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 53/120
	I0603 12:46:06.841583 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 54/120
	I0603 12:46:07.843350 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 55/120
	I0603 12:46:08.844984 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 56/120
	I0603 12:46:09.846483 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 57/120
	I0603 12:46:10.847880 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 58/120
	I0603 12:46:11.849308 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 59/120
	I0603 12:46:12.851286 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 60/120
	I0603 12:46:13.852765 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 61/120
	I0603 12:46:14.854163 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 62/120
	I0603 12:46:15.855538 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 63/120
	I0603 12:46:16.856996 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 64/120
	I0603 12:46:17.858955 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 65/120
	I0603 12:46:18.860225 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 66/120
	I0603 12:46:19.862559 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 67/120
	I0603 12:46:20.865439 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 68/120
	I0603 12:46:21.867484 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 69/120
	I0603 12:46:22.869786 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 70/120
	I0603 12:46:23.872181 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 71/120
	I0603 12:46:24.873811 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 72/120
	I0603 12:46:25.876196 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 73/120
	I0603 12:46:26.877748 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 74/120
	I0603 12:46:27.879706 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 75/120
	I0603 12:46:28.881264 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 76/120
	I0603 12:46:29.882576 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 77/120
	I0603 12:46:30.884030 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 78/120
	I0603 12:46:31.885427 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 79/120
	I0603 12:46:32.887514 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 80/120
	I0603 12:46:33.888885 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 81/120
	I0603 12:46:34.890656 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 82/120
	I0603 12:46:35.891916 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 83/120
	I0603 12:46:36.893091 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 84/120
	I0603 12:46:37.895071 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 85/120
	I0603 12:46:38.896390 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 86/120
	I0603 12:46:39.898144 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 87/120
	I0603 12:46:40.900167 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 88/120
	I0603 12:46:41.901716 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 89/120
	I0603 12:46:42.903978 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 90/120
	I0603 12:46:43.905915 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 91/120
	I0603 12:46:44.907275 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 92/120
	I0603 12:46:45.909620 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 93/120
	I0603 12:46:46.910919 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 94/120
	I0603 12:46:47.912839 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 95/120
	I0603 12:46:48.914117 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 96/120
	I0603 12:46:49.915950 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 97/120
	I0603 12:46:50.917593 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 98/120
	I0603 12:46:51.919028 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 99/120
	I0603 12:46:52.921067 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 100/120
	I0603 12:46:53.923075 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 101/120
	I0603 12:46:54.924498 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 102/120
	I0603 12:46:55.925979 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 103/120
	I0603 12:46:56.928279 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 104/120
	I0603 12:46:57.930689 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 105/120
	I0603 12:46:58.931985 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 106/120
	I0603 12:46:59.933427 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 107/120
	I0603 12:47:00.934835 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 108/120
	I0603 12:47:01.936380 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 109/120
	I0603 12:47:02.937778 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 110/120
	I0603 12:47:03.939732 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 111/120
	I0603 12:47:04.940991 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 112/120
	I0603 12:47:05.942400 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 113/120
	I0603 12:47:06.944585 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 114/120
	I0603 12:47:07.946666 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 115/120
	I0603 12:47:08.948268 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 116/120
	I0603 12:47:09.949644 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 117/120
	I0603 12:47:10.952040 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 118/120
	I0603 12:47:11.953580 1100367 main.go:141] libmachine: (ha-220492-m02) Waiting for machine to stop 119/120
	I0603 12:47:12.954131 1100367 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 12:47:12.954292 1100367 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-220492 node stop m02 -v=7 --alsologtostderr": exit status 30
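The secondary node stop timed out exactly like the addons cluster above: the driver backed up /etc/cni and /etc/kubernetes, issued the stop, and then polled 120 times without the guest powering off, so the status check below reports ha-220492-m02 as Error/Nonexistent. A hedged, out-of-band check that the two surviving control-plane nodes still form a healthy majority; the component=etcd selector assumes kubeadm's default static-pod labels, which this test does not assert:

    # Confirm the remaining control-plane members are still up (label is an assumption).
    minikube -p ha-220492 node list
    kubectl --context ha-220492 get nodes -o wide
    kubectl --context ha-220492 -n kube-system get pods -l component=etcd -o wide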
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 3 (19.241562155s)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220492-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:47:13.003514 1100814 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:47:13.003635 1100814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:13.003643 1100814 out.go:304] Setting ErrFile to fd 2...
	I0603 12:47:13.003648 1100814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:13.003808 1100814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:47:13.004015 1100814 out.go:298] Setting JSON to false
	I0603 12:47:13.004043 1100814 mustload.go:65] Loading cluster: ha-220492
	I0603 12:47:13.004167 1100814 notify.go:220] Checking for updates...
	I0603 12:47:13.004402 1100814 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:47:13.004417 1100814 status.go:255] checking status of ha-220492 ...
	I0603 12:47:13.004788 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:13.004857 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:13.027966 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44145
	I0603 12:47:13.028482 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:13.029144 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:13.029168 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:13.029620 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:13.029799 1100814 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:47:13.031482 1100814 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:47:13.031504 1100814 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:13.031905 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:13.031946 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:13.046907 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43469
	I0603 12:47:13.047402 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:13.047892 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:13.047914 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:13.048297 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:13.048498 1100814 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:47:13.051354 1100814 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:13.051746 1100814 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:13.051775 1100814 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:13.051953 1100814 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:13.052258 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:13.052300 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:13.068064 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32789
	I0603 12:47:13.068564 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:13.069083 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:13.069123 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:13.069455 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:13.069635 1100814 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:47:13.069861 1100814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:13.069889 1100814 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:47:13.072387 1100814 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:13.072837 1100814 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:13.072859 1100814 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:13.073247 1100814 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:47:13.073456 1100814 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:47:13.073643 1100814 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:47:13.073821 1100814 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:47:13.158740 1100814 ssh_runner.go:195] Run: systemctl --version
	I0603 12:47:13.166015 1100814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:13.182670 1100814 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:13.182712 1100814 api_server.go:166] Checking apiserver status ...
	I0603 12:47:13.182754 1100814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:13.198481 1100814 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W0603 12:47:13.207592 1100814 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:13.207651 1100814 ssh_runner.go:195] Run: ls
	I0603 12:47:13.211991 1100814 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:13.217256 1100814 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:13.217280 1100814 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:47:13.217294 1100814 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:13.217325 1100814 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:47:13.217666 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:13.217712 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:13.233224 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I0603 12:47:13.233703 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:13.234158 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:13.234179 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:13.234519 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:13.234721 1100814 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:47:13.236316 1100814 status.go:330] ha-220492-m02 host status = "Running" (err=<nil>)
	I0603 12:47:13.236332 1100814 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:13.236614 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:13.236644 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:13.250982 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0603 12:47:13.251409 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:13.251883 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:13.251911 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:13.252203 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:13.252416 1100814 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:47:13.255163 1100814 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:13.255597 1100814 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:13.255626 1100814 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:13.255747 1100814 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:13.256052 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:13.256093 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:13.273020 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34843
	I0603 12:47:13.273449 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:13.273944 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:13.273968 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:13.274316 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:13.274510 1100814 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:47:13.274682 1100814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:13.274700 1100814 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:47:13.277268 1100814 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:13.277819 1100814 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:13.277847 1100814 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:13.278007 1100814 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:47:13.278165 1100814 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:47:13.278307 1100814 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:47:13.278447 1100814 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	W0603 12:47:31.817749 1100814 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:47:31.817861 1100814 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0603 12:47:31.817879 1100814 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:31.817887 1100814 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 12:47:31.817927 1100814 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:31.817934 1100814 status.go:255] checking status of ha-220492-m03 ...
	I0603 12:47:31.818264 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:31.818321 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:31.834681 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0603 12:47:31.835171 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:31.835716 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:31.835736 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:31.836094 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:31.836346 1100814 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:47:31.838219 1100814 status.go:330] ha-220492-m03 host status = "Running" (err=<nil>)
	I0603 12:47:31.838241 1100814 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:31.838670 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:31.838718 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:31.854460 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45329
	I0603 12:47:31.854905 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:31.855375 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:31.855397 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:31.855736 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:31.855920 1100814 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:47:31.858590 1100814 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:31.859065 1100814 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:31.859090 1100814 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:31.859214 1100814 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:31.859508 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:31.859550 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:31.874460 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I0603 12:47:31.874847 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:31.875308 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:31.875331 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:31.875647 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:31.875827 1100814 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:47:31.875999 1100814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:31.876018 1100814 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:47:31.878809 1100814 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:31.879248 1100814 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:31.879277 1100814 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:31.879442 1100814 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:47:31.879599 1100814 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:47:31.879749 1100814 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:47:31.879895 1100814 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:47:31.968205 1100814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:31.987503 1100814 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:31.987535 1100814 api_server.go:166] Checking apiserver status ...
	I0603 12:47:31.987589 1100814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:32.004403 1100814 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0603 12:47:32.014879 1100814 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:32.014938 1100814 ssh_runner.go:195] Run: ls
	I0603 12:47:32.019805 1100814 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:32.029046 1100814 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:32.029075 1100814 status.go:422] ha-220492-m03 apiserver status = Running (err=<nil>)
	I0603 12:47:32.029085 1100814 status.go:257] ha-220492-m03 status: &{Name:ha-220492-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:32.029104 1100814 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:47:32.029470 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:32.029515 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:32.045019 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43743
	I0603 12:47:32.045472 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:32.046045 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:32.046080 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:32.046457 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:32.046637 1100814 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:47:32.048240 1100814 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:47:32.048264 1100814 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:32.048544 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:32.048579 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:32.063781 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37097
	I0603 12:47:32.064328 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:32.064972 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:32.064996 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:32.065301 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:32.065517 1100814 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:47:32.068268 1100814 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:32.068679 1100814 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:32.068707 1100814 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:32.068864 1100814 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:32.069199 1100814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:32.069237 1100814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:32.084613 1100814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0603 12:47:32.085088 1100814 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:32.085570 1100814 main.go:141] libmachine: Using API Version  1
	I0603 12:47:32.085593 1100814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:32.085893 1100814 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:32.086071 1100814 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:47:32.086246 1100814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:32.086264 1100814 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:47:32.088935 1100814 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:32.089339 1100814 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:32.089378 1100814 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:32.089505 1100814 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:47:32.089685 1100814 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:47:32.089833 1100814 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:47:32.089952 1100814 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:47:32.179191 1100814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:32.195556 1100814 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr" : exit status 3
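Note: the exit status 3 traces back to the m02 check in the stderr above: the SSH dial to 192.168.39.106:22 fails with "connect: no route to host", which status translates into Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent. A minimal Go sketch of that kind of reachability probe (standard library only, not minikube's actual code; the address is the m02 IP from the log, the 10-second timeout is an assumption):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Probe the SSH port of the secondary control-plane node the way the
    // failing "df -h /var" session setup effectively does: a plain TCP dial.
    func main() {
    	addr := "192.168.39.106:22" // ha-220492-m02, from the status log above
    	conn, err := net.DialTimeout("tcp", addr, 10*time.Second) // timeout is an assumption
    	if err != nil {
    		// In this run the dial fails with "connect: no route to host" even
    		// though the stop command still reported the VM as Running; status
    		// maps this failure to Host:Error.
    		fmt.Printf("%s unreachable: %v\n", addr, err)
    		return
    	}
    	defer conn.Close()
    	fmt.Printf("%s reachable\n", addr)
    }

A "no route to host" (rather than a timeout) typically indicates the guest's network stack is already down and the host bridge is answering with host-unreachable, i.e. the VM stopped responding on the network even though libvirt still reports the domain as running.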
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-220492 -n ha-220492
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-220492 logs -n 25: (1.397620039s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3428699095/001/cp-test_ha-220492-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492:/home/docker/cp-test_ha-220492-m03_ha-220492.txt                       |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492 sudo cat                                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492.txt                                 |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m02:/home/docker/cp-test_ha-220492-m03_ha-220492-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m02 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04:/home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m04 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp testdata/cp-test.txt                                                | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3428699095/001/cp-test_ha-220492-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492:/home/docker/cp-test_ha-220492-m04_ha-220492.txt                       |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492 sudo cat                                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492.txt                                 |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m02:/home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m02 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03:/home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m03 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-220492 node stop m02 -v=7                                                     | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:40:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:40:45.154122 1096371 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:40:45.154220 1096371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:40:45.154228 1096371 out.go:304] Setting ErrFile to fd 2...
	I0603 12:40:45.154232 1096371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:40:45.154410 1096371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:40:45.154944 1096371 out.go:298] Setting JSON to false
	I0603 12:40:45.155926 1096371 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12192,"bootTime":1717406253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:40:45.155986 1096371 start.go:139] virtualization: kvm guest
	I0603 12:40:45.158145 1096371 out.go:177] * [ha-220492] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:40:45.159736 1096371 notify.go:220] Checking for updates...
	I0603 12:40:45.159744 1096371 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:40:45.161095 1096371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:40:45.162385 1096371 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:40:45.163711 1096371 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:40:45.164898 1096371 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:40:45.166037 1096371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:40:45.167326 1096371 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:40:45.202490 1096371 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 12:40:45.203766 1096371 start.go:297] selected driver: kvm2
	I0603 12:40:45.203780 1096371 start.go:901] validating driver "kvm2" against <nil>
	I0603 12:40:45.203793 1096371 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:40:45.204471 1096371 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:40:45.204555 1096371 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:40:45.219610 1096371 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:40:45.219670 1096371 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 12:40:45.219878 1096371 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:40:45.219951 1096371 cni.go:84] Creating CNI manager for ""
	I0603 12:40:45.219969 1096371 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 12:40:45.219978 1096371 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 12:40:45.220046 1096371 start.go:340] cluster config:
	{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:40:45.220155 1096371 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:40:45.221748 1096371 out.go:177] * Starting "ha-220492" primary control-plane node in "ha-220492" cluster
	I0603 12:40:45.222990 1096371 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:40:45.223024 1096371 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 12:40:45.223048 1096371 cache.go:56] Caching tarball of preloaded images
	I0603 12:40:45.223125 1096371 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:40:45.223137 1096371 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:40:45.223447 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:40:45.223472 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json: {Name:mkc9aa250f9d043c2e947d40a6dc3875c1521c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:40:45.223612 1096371 start.go:360] acquireMachinesLock for ha-220492: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:40:45.223654 1096371 start.go:364] duration metric: took 25.719µs to acquireMachinesLock for "ha-220492"
	I0603 12:40:45.223683 1096371 start.go:93] Provisioning new machine with config: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:40:45.223742 1096371 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 12:40:45.225464 1096371 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 12:40:45.225606 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:40:45.225660 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:40:45.239421 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34301
	I0603 12:40:45.239910 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:40:45.240536 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:40:45.240564 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:40:45.240924 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:40:45.241106 1096371 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:40:45.241237 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:40:45.241441 1096371 start.go:159] libmachine.API.Create for "ha-220492" (driver="kvm2")
	I0603 12:40:45.241473 1096371 client.go:168] LocalClient.Create starting
	I0603 12:40:45.241501 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 12:40:45.241533 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:40:45.241550 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:40:45.241605 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 12:40:45.241624 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:40:45.241637 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:40:45.241653 1096371 main.go:141] libmachine: Running pre-create checks...
	I0603 12:40:45.241662 1096371 main.go:141] libmachine: (ha-220492) Calling .PreCreateCheck
	I0603 12:40:45.242015 1096371 main.go:141] libmachine: (ha-220492) Calling .GetConfigRaw
	I0603 12:40:45.242395 1096371 main.go:141] libmachine: Creating machine...
	I0603 12:40:45.242419 1096371 main.go:141] libmachine: (ha-220492) Calling .Create
	I0603 12:40:45.242576 1096371 main.go:141] libmachine: (ha-220492) Creating KVM machine...
	I0603 12:40:45.243829 1096371 main.go:141] libmachine: (ha-220492) DBG | found existing default KVM network
	I0603 12:40:45.244550 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.244404 1096394 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d9a0}
	I0603 12:40:45.244567 1096371 main.go:141] libmachine: (ha-220492) DBG | created network xml: 
	I0603 12:40:45.244577 1096371 main.go:141] libmachine: (ha-220492) DBG | <network>
	I0603 12:40:45.244582 1096371 main.go:141] libmachine: (ha-220492) DBG |   <name>mk-ha-220492</name>
	I0603 12:40:45.244588 1096371 main.go:141] libmachine: (ha-220492) DBG |   <dns enable='no'/>
	I0603 12:40:45.244592 1096371 main.go:141] libmachine: (ha-220492) DBG |   
	I0603 12:40:45.244602 1096371 main.go:141] libmachine: (ha-220492) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 12:40:45.244613 1096371 main.go:141] libmachine: (ha-220492) DBG |     <dhcp>
	I0603 12:40:45.244623 1096371 main.go:141] libmachine: (ha-220492) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 12:40:45.244634 1096371 main.go:141] libmachine: (ha-220492) DBG |     </dhcp>
	I0603 12:40:45.244642 1096371 main.go:141] libmachine: (ha-220492) DBG |   </ip>
	I0603 12:40:45.244653 1096371 main.go:141] libmachine: (ha-220492) DBG |   
	I0603 12:40:45.244665 1096371 main.go:141] libmachine: (ha-220492) DBG | </network>
	I0603 12:40:45.244673 1096371 main.go:141] libmachine: (ha-220492) DBG | 
	I0603 12:40:45.249628 1096371 main.go:141] libmachine: (ha-220492) DBG | trying to create private KVM network mk-ha-220492 192.168.39.0/24...
	I0603 12:40:45.311984 1096371 main.go:141] libmachine: (ha-220492) DBG | private KVM network mk-ha-220492 192.168.39.0/24 created
	I0603 12:40:45.312068 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.311945 1096394 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:40:45.312094 1096371 main.go:141] libmachine: (ha-220492) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492 ...
	I0603 12:40:45.312130 1096371 main.go:141] libmachine: (ha-220492) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:40:45.312150 1096371 main.go:141] libmachine: (ha-220492) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:40:45.584465 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.584331 1096394 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa...
	I0603 12:40:45.705607 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.705464 1096394 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/ha-220492.rawdisk...
	I0603 12:40:45.705640 1096371 main.go:141] libmachine: (ha-220492) DBG | Writing magic tar header
	I0603 12:40:45.705650 1096371 main.go:141] libmachine: (ha-220492) DBG | Writing SSH key tar header
	I0603 12:40:45.705737 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.705644 1096394 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492 ...
	I0603 12:40:45.705855 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492
	I0603 12:40:45.705879 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492 (perms=drwx------)
	I0603 12:40:45.705888 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 12:40:45.705899 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 12:40:45.705915 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 12:40:45.705929 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:40:45.705940 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 12:40:45.705956 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 12:40:45.705966 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 12:40:45.705975 1096371 main.go:141] libmachine: (ha-220492) Creating domain...
	I0603 12:40:45.705988 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 12:40:45.706002 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 12:40:45.706018 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins
	I0603 12:40:45.706029 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home
	I0603 12:40:45.706040 1096371 main.go:141] libmachine: (ha-220492) DBG | Skipping /home - not owner
	I0603 12:40:45.707030 1096371 main.go:141] libmachine: (ha-220492) define libvirt domain using xml: 
	I0603 12:40:45.707050 1096371 main.go:141] libmachine: (ha-220492) <domain type='kvm'>
	I0603 12:40:45.707056 1096371 main.go:141] libmachine: (ha-220492)   <name>ha-220492</name>
	I0603 12:40:45.707064 1096371 main.go:141] libmachine: (ha-220492)   <memory unit='MiB'>2200</memory>
	I0603 12:40:45.707090 1096371 main.go:141] libmachine: (ha-220492)   <vcpu>2</vcpu>
	I0603 12:40:45.707111 1096371 main.go:141] libmachine: (ha-220492)   <features>
	I0603 12:40:45.707120 1096371 main.go:141] libmachine: (ha-220492)     <acpi/>
	I0603 12:40:45.707127 1096371 main.go:141] libmachine: (ha-220492)     <apic/>
	I0603 12:40:45.707135 1096371 main.go:141] libmachine: (ha-220492)     <pae/>
	I0603 12:40:45.707147 1096371 main.go:141] libmachine: (ha-220492)     
	I0603 12:40:45.707155 1096371 main.go:141] libmachine: (ha-220492)   </features>
	I0603 12:40:45.707162 1096371 main.go:141] libmachine: (ha-220492)   <cpu mode='host-passthrough'>
	I0603 12:40:45.707174 1096371 main.go:141] libmachine: (ha-220492)   
	I0603 12:40:45.707184 1096371 main.go:141] libmachine: (ha-220492)   </cpu>
	I0603 12:40:45.707192 1096371 main.go:141] libmachine: (ha-220492)   <os>
	I0603 12:40:45.707199 1096371 main.go:141] libmachine: (ha-220492)     <type>hvm</type>
	I0603 12:40:45.707208 1096371 main.go:141] libmachine: (ha-220492)     <boot dev='cdrom'/>
	I0603 12:40:45.707219 1096371 main.go:141] libmachine: (ha-220492)     <boot dev='hd'/>
	I0603 12:40:45.707296 1096371 main.go:141] libmachine: (ha-220492)     <bootmenu enable='no'/>
	I0603 12:40:45.707352 1096371 main.go:141] libmachine: (ha-220492)   </os>
	I0603 12:40:45.707369 1096371 main.go:141] libmachine: (ha-220492)   <devices>
	I0603 12:40:45.707381 1096371 main.go:141] libmachine: (ha-220492)     <disk type='file' device='cdrom'>
	I0603 12:40:45.707398 1096371 main.go:141] libmachine: (ha-220492)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/boot2docker.iso'/>
	I0603 12:40:45.707417 1096371 main.go:141] libmachine: (ha-220492)       <target dev='hdc' bus='scsi'/>
	I0603 12:40:45.707434 1096371 main.go:141] libmachine: (ha-220492)       <readonly/>
	I0603 12:40:45.707454 1096371 main.go:141] libmachine: (ha-220492)     </disk>
	I0603 12:40:45.707466 1096371 main.go:141] libmachine: (ha-220492)     <disk type='file' device='disk'>
	I0603 12:40:45.707484 1096371 main.go:141] libmachine: (ha-220492)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 12:40:45.707499 1096371 main.go:141] libmachine: (ha-220492)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/ha-220492.rawdisk'/>
	I0603 12:40:45.707510 1096371 main.go:141] libmachine: (ha-220492)       <target dev='hda' bus='virtio'/>
	I0603 12:40:45.707518 1096371 main.go:141] libmachine: (ha-220492)     </disk>
	I0603 12:40:45.707533 1096371 main.go:141] libmachine: (ha-220492)     <interface type='network'>
	I0603 12:40:45.707547 1096371 main.go:141] libmachine: (ha-220492)       <source network='mk-ha-220492'/>
	I0603 12:40:45.707557 1096371 main.go:141] libmachine: (ha-220492)       <model type='virtio'/>
	I0603 12:40:45.707566 1096371 main.go:141] libmachine: (ha-220492)     </interface>
	I0603 12:40:45.707576 1096371 main.go:141] libmachine: (ha-220492)     <interface type='network'>
	I0603 12:40:45.707605 1096371 main.go:141] libmachine: (ha-220492)       <source network='default'/>
	I0603 12:40:45.707621 1096371 main.go:141] libmachine: (ha-220492)       <model type='virtio'/>
	I0603 12:40:45.707633 1096371 main.go:141] libmachine: (ha-220492)     </interface>
	I0603 12:40:45.707643 1096371 main.go:141] libmachine: (ha-220492)     <serial type='pty'>
	I0603 12:40:45.707654 1096371 main.go:141] libmachine: (ha-220492)       <target port='0'/>
	I0603 12:40:45.707664 1096371 main.go:141] libmachine: (ha-220492)     </serial>
	I0603 12:40:45.707675 1096371 main.go:141] libmachine: (ha-220492)     <console type='pty'>
	I0603 12:40:45.707690 1096371 main.go:141] libmachine: (ha-220492)       <target type='serial' port='0'/>
	I0603 12:40:45.707709 1096371 main.go:141] libmachine: (ha-220492)     </console>
	I0603 12:40:45.707725 1096371 main.go:141] libmachine: (ha-220492)     <rng model='virtio'>
	I0603 12:40:45.707750 1096371 main.go:141] libmachine: (ha-220492)       <backend model='random'>/dev/random</backend>
	I0603 12:40:45.707766 1096371 main.go:141] libmachine: (ha-220492)     </rng>
	I0603 12:40:45.707778 1096371 main.go:141] libmachine: (ha-220492)     
	I0603 12:40:45.707791 1096371 main.go:141] libmachine: (ha-220492)     
	I0603 12:40:45.707809 1096371 main.go:141] libmachine: (ha-220492)   </devices>
	I0603 12:40:45.707823 1096371 main.go:141] libmachine: (ha-220492) </domain>
	I0603 12:40:45.707838 1096371 main.go:141] libmachine: (ha-220492) 
	I0603 12:40:45.711436 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:de:86:31 in network default
	I0603 12:40:45.712025 1096371 main.go:141] libmachine: (ha-220492) Ensuring networks are active...
	I0603 12:40:45.712047 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:45.712635 1096371 main.go:141] libmachine: (ha-220492) Ensuring network default is active
	I0603 12:40:45.712929 1096371 main.go:141] libmachine: (ha-220492) Ensuring network mk-ha-220492 is active
	I0603 12:40:45.713519 1096371 main.go:141] libmachine: (ha-220492) Getting domain xml...
	I0603 12:40:45.714138 1096371 main.go:141] libmachine: (ha-220492) Creating domain...
	I0603 12:40:46.873866 1096371 main.go:141] libmachine: (ha-220492) Waiting to get IP...
	I0603 12:40:46.874617 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:46.875016 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:46.875059 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:46.874985 1096394 retry.go:31] will retry after 292.608651ms: waiting for machine to come up
	I0603 12:40:47.169512 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:47.169993 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:47.170024 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:47.169954 1096394 retry.go:31] will retry after 331.173202ms: waiting for machine to come up
	I0603 12:40:47.502498 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:47.502913 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:47.502948 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:47.502857 1096394 retry.go:31] will retry after 380.084322ms: waiting for machine to come up
	I0603 12:40:47.884522 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:47.884945 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:47.884970 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:47.884914 1096394 retry.go:31] will retry after 457.940031ms: waiting for machine to come up
	I0603 12:40:48.344494 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:48.344876 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:48.344897 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:48.344817 1096394 retry.go:31] will retry after 632.576512ms: waiting for machine to come up
	I0603 12:40:48.978563 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:48.978972 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:48.978999 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:48.978929 1096394 retry.go:31] will retry after 909.430383ms: waiting for machine to come up
	I0603 12:40:49.889574 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:49.889917 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:49.889951 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:49.889847 1096394 retry.go:31] will retry after 1.060400826s: waiting for machine to come up
	I0603 12:40:50.951652 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:50.952086 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:50.952113 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:50.952035 1096394 retry.go:31] will retry after 967.639036ms: waiting for machine to come up
	I0603 12:40:51.921500 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:51.921850 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:51.921911 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:51.921829 1096394 retry.go:31] will retry after 1.739106555s: waiting for machine to come up
	I0603 12:40:53.665285 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:53.665828 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:53.665858 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:53.665772 1096394 retry.go:31] will retry after 1.453970794s: waiting for machine to come up
	I0603 12:40:55.121583 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:55.121969 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:55.122001 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:55.121908 1096394 retry.go:31] will retry after 1.916636172s: waiting for machine to come up
	I0603 12:40:57.040564 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:57.041000 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:57.041029 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:57.040958 1096394 retry.go:31] will retry after 2.280642214s: waiting for machine to come up
	I0603 12:40:59.324400 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:59.324815 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:59.324841 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:59.324777 1096394 retry.go:31] will retry after 4.41502757s: waiting for machine to come up
	I0603 12:41:03.743917 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:03.744314 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:41:03.744338 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:41:03.744274 1096394 retry.go:31] will retry after 4.66191218s: waiting for machine to come up
	I0603 12:41:08.410233 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.410774 1096371 main.go:141] libmachine: (ha-220492) Found IP for machine: 192.168.39.6
	I0603 12:41:08.410804 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has current primary IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.410813 1096371 main.go:141] libmachine: (ha-220492) Reserving static IP address...
	I0603 12:41:08.411211 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find host DHCP lease matching {name: "ha-220492", mac: "52:54:00:79:0d:a6", ip: "192.168.39.6"} in network mk-ha-220492
	I0603 12:41:08.484713 1096371 main.go:141] libmachine: (ha-220492) DBG | Getting to WaitForSSH function...
	I0603 12:41:08.484747 1096371 main.go:141] libmachine: (ha-220492) Reserved static IP address: 192.168.39.6
	I0603 12:41:08.484761 1096371 main.go:141] libmachine: (ha-220492) Waiting for SSH to be available...
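The block above shows minikube's wait-for-IP loop: it polls libvirt for the domain's DHCP lease and backs off with steadily growing delays (roughly 300ms up to a few seconds) until 192.168.39.6 appears. Below is a minimal Go sketch of that polling pattern, assuming a caller-supplied lookup function; the name waitForIP, the delay constants and the jitter are illustrative only and not minikube's actual retry.go helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes.
// Delays grow on each attempt, roughly mirroring the "will retry after ..."
// intervals in the log above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Add a little jitter and grow the delay, capped at 5s.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/3))))
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 { // pretend the DHCP lease shows up on the 5th poll
			return "", errors.New("no lease yet")
		}
		return "192.168.39.6", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}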
	I0603 12:41:08.487460 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.487883 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.487928 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.488036 1096371 main.go:141] libmachine: (ha-220492) DBG | Using SSH client type: external
	I0603 12:41:08.488065 1096371 main.go:141] libmachine: (ha-220492) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa (-rw-------)
	I0603 12:41:08.488115 1096371 main.go:141] libmachine: (ha-220492) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:41:08.488135 1096371 main.go:141] libmachine: (ha-220492) DBG | About to run SSH command:
	I0603 12:41:08.488148 1096371 main.go:141] libmachine: (ha-220492) DBG | exit 0
	I0603 12:41:08.617602 1096371 main.go:141] libmachine: (ha-220492) DBG | SSH cmd err, output: <nil>: 
	I0603 12:41:08.617902 1096371 main.go:141] libmachine: (ha-220492) KVM machine creation complete!
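WaitForSSH above shells out to the system ssh client with host-key checking disabled and runs `exit 0` until the command exits cleanly. A rough sketch of that probe using os/exec follows, reusing the user, address and key path from the log; the retry interval and timeout handling are simplifications, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once a non-interactive `ssh ... exit 0` succeeds,
// mirroring the external SSH probe in the log above.
func sshReady(user, addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "PasswordAuthentication=no",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, addr),
			"exit", "0")
		if err := cmd.Run(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("ssh not ready after %s: %w", timeout, err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	// Values taken from the log above; adjust for your own machine.
	err := sshReady("docker", "192.168.39.6",
		"/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa",
		2*time.Minute)
	fmt.Println("ssh ready:", err)
}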
	I0603 12:41:08.618255 1096371 main.go:141] libmachine: (ha-220492) Calling .GetConfigRaw
	I0603 12:41:08.618835 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:08.619050 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:08.619264 1096371 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 12:41:08.619281 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:41:08.620453 1096371 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 12:41:08.620481 1096371 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 12:41:08.620487 1096371 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 12:41:08.620508 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:08.623035 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.623483 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.623499 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.623677 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:08.623919 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.624078 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.624333 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:08.624520 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:08.624742 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:08.624757 1096371 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 12:41:08.732628 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:41:08.732662 1096371 main.go:141] libmachine: Detecting the provisioner...
	I0603 12:41:08.732674 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:08.735828 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.736203 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.736226 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.736419 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:08.736625 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.736793 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.736950 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:08.737098 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:08.737324 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:08.737339 1096371 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 12:41:08.846417 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 12:41:08.846525 1096371 main.go:141] libmachine: found compatible host: buildroot
	I0603 12:41:08.846537 1096371 main.go:141] libmachine: Provisioning with buildroot...
	I0603 12:41:08.846545 1096371 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:41:08.846871 1096371 buildroot.go:166] provisioning hostname "ha-220492"
	I0603 12:41:08.846903 1096371 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:41:08.847118 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:08.849533 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.849812 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.849854 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.849968 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:08.850170 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.850325 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.850543 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:08.850678 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:08.850889 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:08.850902 1096371 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220492 && echo "ha-220492" | sudo tee /etc/hostname
	I0603 12:41:08.975847 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492
	
	I0603 12:41:08.975877 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:08.978686 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.978954 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.978999 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.979154 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:08.979387 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.979591 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.979736 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:08.979922 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:08.980097 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:08.980113 1096371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220492/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:41:09.099148 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:41:09.099187 1096371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:41:09.099227 1096371 buildroot.go:174] setting up certificates
	I0603 12:41:09.099240 1096371 provision.go:84] configureAuth start
	I0603 12:41:09.099252 1096371 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:41:09.099581 1096371 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:41:09.102107 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.102418 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.102444 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.102566 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.104787 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.105123 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.105149 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.105298 1096371 provision.go:143] copyHostCerts
	I0603 12:41:09.105329 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:41:09.105377 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 12:41:09.105387 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:41:09.105475 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:41:09.105607 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:41:09.105626 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 12:41:09.105631 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:41:09.105661 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:41:09.105718 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:41:09.105736 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 12:41:09.105739 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:41:09.105772 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:41:09.105833 1096371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.ha-220492 san=[127.0.0.1 192.168.39.6 ha-220492 localhost minikube]
	I0603 12:41:09.144506 1096371 provision.go:177] copyRemoteCerts
	I0603 12:41:09.144571 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:41:09.144595 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.147555 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.147871 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.147911 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.148084 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.148311 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.148463 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.148616 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:09.232186 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 12:41:09.232270 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:41:09.256495 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 12:41:09.256591 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 12:41:09.279937 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 12:41:09.280020 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:41:09.302800 1096371 provision.go:87] duration metric: took 203.541974ms to configureAuth
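configureAuth above generated a server certificate whose SANs cover 127.0.0.1, 192.168.39.6, ha-220492, localhost and minikube, signed by the local minikube CA. The sketch below shows that style of certificate generation with crypto/x509, self-signed for brevity rather than CA-signed; the organization, expiry and output filenames are placeholders, not the real provision.go logic.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-220492"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the 26280h0m0s CertExpiration in the profile
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the log: IPs plus hostnames.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.6")},
		DNSNames:    []string{"ha-220492", "localhost", "minikube"},
	}
	// Self-signed here; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}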
	I0603 12:41:09.302832 1096371 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:41:09.303052 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:41:09.303169 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.305950 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.306309 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.306345 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.306571 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.306767 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.306974 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.307118 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.307322 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:09.307541 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:09.307568 1096371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:41:09.582908 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:41:09.582947 1096371 main.go:141] libmachine: Checking connection to Docker...
	I0603 12:41:09.582973 1096371 main.go:141] libmachine: (ha-220492) Calling .GetURL
	I0603 12:41:09.584407 1096371 main.go:141] libmachine: (ha-220492) DBG | Using libvirt version 6000000
	I0603 12:41:09.586804 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.587235 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.587260 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.587399 1096371 main.go:141] libmachine: Docker is up and running!
	I0603 12:41:09.587414 1096371 main.go:141] libmachine: Reticulating splines...
	I0603 12:41:09.587424 1096371 client.go:171] duration metric: took 24.345940503s to LocalClient.Create
	I0603 12:41:09.587453 1096371 start.go:167] duration metric: took 24.346013192s to libmachine.API.Create "ha-220492"
	I0603 12:41:09.587467 1096371 start.go:293] postStartSetup for "ha-220492" (driver="kvm2")
	I0603 12:41:09.587488 1096371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:41:09.587511 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.587761 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:41:09.587787 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.589732 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.590060 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.590087 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.590164 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.590378 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.590558 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.590740 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:09.676420 1096371 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:41:09.680623 1096371 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:41:09.680650 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:41:09.680735 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:41:09.680843 1096371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 12:41:09.680858 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 12:41:09.680969 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:41:09.690475 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:41:09.714650 1096371 start.go:296] duration metric: took 127.159539ms for postStartSetup
	I0603 12:41:09.714708 1096371 main.go:141] libmachine: (ha-220492) Calling .GetConfigRaw
	I0603 12:41:09.715397 1096371 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:41:09.718274 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.718634 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.718662 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.718992 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:41:09.719173 1096371 start.go:128] duration metric: took 24.495419868s to createHost
	I0603 12:41:09.719240 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.721338 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.721632 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.721654 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.721797 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.721975 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.722162 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.722277 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.722449 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:09.722617 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:09.722638 1096371 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:41:09.834352 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418469.811408647
	
	I0603 12:41:09.834385 1096371 fix.go:216] guest clock: 1717418469.811408647
	I0603 12:41:09.834395 1096371 fix.go:229] Guest: 2024-06-03 12:41:09.811408647 +0000 UTC Remote: 2024-06-03 12:41:09.719204809 +0000 UTC m=+24.601774795 (delta=92.203838ms)
	I0603 12:41:09.834422 1096371 fix.go:200] guest clock delta is within tolerance: 92.203838ms
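fix.go above reads the guest clock with `date +%s.%N` over SSH and compares it to the host clock, accepting the ~92ms delta. A toy Go sketch of that comparison using the two timestamps from the log; the one-second tolerance is an assumed value for illustration, not the threshold minikube uses.

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK reports the guest/host clock delta and whether it is within tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Timestamps taken from the log above.
	guest := time.Unix(1717418469, 811408647) // parsed from `date +%s.%N` on the guest
	host := time.Date(2024, 6, 3, 12, 41, 9, 719204809, time.UTC)
	delta, ok := clockDeltaOK(guest, host, time.Second) // tolerance value is illustrative
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}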
	I0603 12:41:09.834428 1096371 start.go:83] releasing machines lock for "ha-220492", held for 24.610763142s
	I0603 12:41:09.834448 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.834698 1096371 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:41:09.837362 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.837770 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.837810 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.837878 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.838413 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.838611 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.838714 1096371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:41:09.838765 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.838861 1096371 ssh_runner.go:195] Run: cat /version.json
	I0603 12:41:09.838887 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.841501 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.841605 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.841930 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.841956 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.842004 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.842040 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.842084 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.842265 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.842326 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.842453 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.842481 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.842687 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:09.842707 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.842841 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:09.946969 1096371 ssh_runner.go:195] Run: systemctl --version
	I0603 12:41:09.953061 1096371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:41:10.114367 1096371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:41:10.120451 1096371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:41:10.120507 1096371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:41:10.136901 1096371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:41:10.136927 1096371 start.go:494] detecting cgroup driver to use...
	I0603 12:41:10.137010 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:41:10.152519 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:41:10.166479 1096371 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:41:10.166553 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:41:10.179615 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:41:10.192772 1096371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:41:10.302754 1096371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:41:10.447209 1096371 docker.go:233] disabling docker service ...
	I0603 12:41:10.447309 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:41:10.462073 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:41:10.475186 1096371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:41:10.604450 1096371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:41:10.730595 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:41:10.744935 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:41:10.763746 1096371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:41:10.763808 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.774316 1096371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:41:10.774404 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.784785 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.795071 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.805255 1096371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:41:10.815375 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.825270 1096371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.842181 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.852166 1096371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:41:10.861053 1096371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:41:10.861113 1096371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:41:10.874159 1096371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:41:10.883417 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:41:10.992570 1096371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:41:11.128086 1096371 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:41:11.128206 1096371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:41:11.132907 1096371 start.go:562] Will wait 60s for crictl version
	I0603 12:41:11.132978 1096371 ssh_runner.go:195] Run: which crictl
	I0603 12:41:11.136891 1096371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:41:11.176818 1096371 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:41:11.176897 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:41:11.205711 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:41:11.235610 1096371 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:41:11.236829 1096371 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:41:11.239504 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:11.239857 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:11.239902 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:11.240094 1096371 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:41:11.244177 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:41:11.257181 1096371 kubeadm.go:877] updating cluster {Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:41:11.257314 1096371 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:41:11.257358 1096371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:41:11.290352 1096371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:41:11.290435 1096371 ssh_runner.go:195] Run: which lz4
	I0603 12:41:11.294176 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 12:41:11.294272 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 12:41:11.298645 1096371 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:41:11.298674 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:41:12.707970 1096371 crio.go:462] duration metric: took 1.413714631s to copy over tarball
	I0603 12:41:12.708044 1096371 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:41:14.850543 1096371 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.142469766s)
	I0603 12:41:14.850575 1096371 crio.go:469] duration metric: took 2.142572179s to extract the tarball
	I0603 12:41:14.850582 1096371 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:41:14.888041 1096371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:41:14.937691 1096371 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:41:14.937722 1096371 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:41:14.937731 1096371 kubeadm.go:928] updating node { 192.168.39.6 8443 v1.30.1 crio true true} ...
	I0603 12:41:14.937872 1096371 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:41:14.937971 1096371 ssh_runner.go:195] Run: crio config
	I0603 12:41:14.983244 1096371 cni.go:84] Creating CNI manager for ""
	I0603 12:41:14.983269 1096371 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 12:41:14.983283 1096371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:41:14.983306 1096371 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-220492 NodeName:ha-220492 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:41:14.983454 1096371 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-220492"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
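The kubeadm config printed above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. The sketch below pulls a couple of fields back out of the ClusterConfiguration document with gopkg.in/yaml.v3 (an assumed external dependency) just to illustrate the shape of the data; the struct and field selection are illustrative and not part of minikube.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// clusterConfig models only the handful of ClusterConfiguration fields used here.
type clusterConfig struct {
	Kind                 string `yaml:"kind"`
	KubernetesVersion    string `yaml:"kubernetesVersion"`
	ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
	Networking           struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

// doc repeats a fragment of the generated config from the log above.
const doc = `
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: v1.30.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	var cfg clusterConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s %s endpoint=%s podCIDR=%s\n",
		cfg.Kind, cfg.KubernetesVersion, cfg.ControlPlaneEndpoint, cfg.Networking.PodSubnet)
}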
	
	I0603 12:41:14.983485 1096371 kube-vip.go:115] generating kube-vip config ...
	I0603 12:41:14.983530 1096371 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 12:41:15.002647 1096371 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 12:41:15.002758 1096371 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
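The kube-vip static pod manifest above is rendered by minikube from a template parameterized by the HA virtual IP (192.168.39.254), interface and port. A rough text/template sketch of that kind of rendering follows, trimmed to a few of the environment variables shown; the template text and vipParams struct are illustrative, not minikube's actual kube-vip config template.

package main

import (
	"os"
	"text/template"
)

// vipParams carries the values substituted into the manifest; the names are illustrative.
type vipParams struct {
	VIP       string
	Interface string
	Port      string
	Image     string
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the generated config in the log above.
	_ = t.Execute(os.Stdout, vipParams{
		VIP:       "192.168.39.254",
		Interface: "eth0",
		Port:      "8443",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
	})
}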
	I0603 12:41:15.002834 1096371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:41:15.013239 1096371 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:41:15.013305 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 12:41:15.023168 1096371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0603 12:41:15.040219 1096371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:41:15.056200 1096371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0603 12:41:15.072933 1096371 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0603 12:41:15.089270 1096371 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 12:41:15.093234 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:41:15.105390 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:41:15.213160 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:41:15.228491 1096371 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492 for IP: 192.168.39.6
	I0603 12:41:15.228516 1096371 certs.go:194] generating shared ca certs ...
	I0603 12:41:15.228534 1096371 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:15.228726 1096371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:41:15.228786 1096371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:41:15.228800 1096371 certs.go:256] generating profile certs ...
	I0603 12:41:15.228874 1096371 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key
	I0603 12:41:15.228891 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt with IP's: []
	I0603 12:41:16.007432 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt ...
	I0603 12:41:16.007467 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt: {Name:mkcf8e4c0397b30b1fc6ff360e1357815d7e9487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.007645 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key ...
	I0603 12:41:16.007657 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key: {Name:mkf5571341d9e95c379715e81518b377f7fe4a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.007742 1096371 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.8b57d20e
	I0603 12:41:16.007758 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.8b57d20e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.254]
	I0603 12:41:16.076792 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.8b57d20e ...
	I0603 12:41:16.076823 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.8b57d20e: {Name:mk7c3878ef4aff24b303a01d932b8859cd5fadb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.076982 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.8b57d20e ...
	I0603 12:41:16.076995 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.8b57d20e: {Name:mk2a40f6900698664b0c05d410f3a6a10c2384fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.077063 1096371 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.8b57d20e -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt
	I0603 12:41:16.077155 1096371 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.8b57d20e -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key
	I0603 12:41:16.077214 1096371 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key
	I0603 12:41:16.077230 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt with IP's: []
	I0603 12:41:16.343261 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt ...
	I0603 12:41:16.343300 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt: {Name:mk84cee379f524557192feddab8407818bce5852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.343477 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key ...
	I0603 12:41:16.343487 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key: {Name:mkba07abba520f757a1375a8fe5f778a22b26881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.343559 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 12:41:16.343576 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 12:41:16.343586 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 12:41:16.343599 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 12:41:16.343609 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 12:41:16.343620 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 12:41:16.343630 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 12:41:16.343641 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 12:41:16.343698 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 12:41:16.343735 1096371 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 12:41:16.343745 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:41:16.343767 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:41:16.343790 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:41:16.343811 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 12:41:16.343846 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:41:16.343872 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 12:41:16.343886 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:41:16.343898 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 12:41:16.344503 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:41:16.374351 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:41:16.399920 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:41:16.426092 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:41:16.450523 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 12:41:16.475199 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 12:41:16.499461 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:41:16.525093 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:41:16.548674 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 12:41:16.572498 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:41:16.595826 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 12:41:16.619941 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:41:16.636934 1096371 ssh_runner.go:195] Run: openssl version
	I0603 12:41:16.642939 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 12:41:16.653985 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 12:41:16.658496 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 12:41:16.658561 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 12:41:16.664405 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 12:41:16.675317 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 12:41:16.686072 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 12:41:16.690593 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 12:41:16.690652 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 12:41:16.696468 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:41:16.707203 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:41:16.717986 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:41:16.722665 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:41:16.722725 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:41:16.728327 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
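The test -L / ln -fs pairs above implement OpenSSL's hashed-directory convention: every CA certificate placed under /etc/ssl/certs is also reachable through a symlink named after its subject hash, which is what OpenSSL's default verify paths look up. For example, using the minikubeCA certificate and the b5213941 hash seen in this run:

    # prints the subject hash that becomes the symlink name (b5213941 here)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0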
	I0603 12:41:16.739532 1096371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:41:16.743738 1096371 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 12:41:16.743806 1096371 kubeadm.go:391] StartCluster: {Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:41:16.743898 1096371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:41:16.743942 1096371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:41:16.786837 1096371 cri.go:89] found id: ""
	I0603 12:41:16.786924 1096371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 12:41:16.797324 1096371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:41:16.807138 1096371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:41:16.816725 1096371 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:41:16.816747 1096371 kubeadm.go:156] found existing configuration files:
	
	I0603 12:41:16.816787 1096371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:41:16.826048 1096371 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:41:16.826117 1096371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:41:16.835723 1096371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:41:16.844806 1096371 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:41:16.844855 1096371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:41:16.854180 1096371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:41:16.863089 1096371 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:41:16.863148 1096371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:41:16.872432 1096371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:41:16.881249 1096371 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:41:16.881315 1096371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 12:41:16.893091 1096371 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:41:17.142716 1096371 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:41:27.696084 1096371 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:41:27.696146 1096371 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:41:27.696209 1096371 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:41:27.696314 1096371 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:41:27.696448 1096371 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:41:27.696559 1096371 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:41:27.697948 1096371 out.go:204]   - Generating certificates and keys ...
	I0603 12:41:27.698064 1096371 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:41:27.698153 1096371 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:41:27.698252 1096371 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 12:41:27.698353 1096371 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 12:41:27.698433 1096371 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 12:41:27.698486 1096371 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 12:41:27.698532 1096371 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 12:41:27.698649 1096371 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-220492 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0603 12:41:27.698720 1096371 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 12:41:27.698859 1096371 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-220492 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0603 12:41:27.698941 1096371 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 12:41:27.699029 1096371 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 12:41:27.699095 1096371 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 12:41:27.699173 1096371 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:41:27.699237 1096371 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:41:27.699308 1096371 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:41:27.699388 1096371 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:41:27.699470 1096371 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:41:27.699550 1096371 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:41:27.699659 1096371 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:41:27.699746 1096371 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:41:27.701049 1096371 out.go:204]   - Booting up control plane ...
	I0603 12:41:27.701154 1096371 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:41:27.701232 1096371 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:41:27.701295 1096371 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:41:27.701389 1096371 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:41:27.701482 1096371 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:41:27.701521 1096371 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:41:27.701655 1096371 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:41:27.701751 1096371 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:41:27.701808 1096371 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.456802ms
	I0603 12:41:27.701867 1096371 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:41:27.701916 1096371 kubeadm.go:309] [api-check] The API server is healthy after 6.002004686s
	I0603 12:41:27.702014 1096371 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:41:27.702116 1096371 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:41:27.702171 1096371 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:41:27.702358 1096371 kubeadm.go:309] [mark-control-plane] Marking the node ha-220492 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:41:27.702418 1096371 kubeadm.go:309] [bootstrap-token] Using token: udpj77.zgtf6r34m22e6dpn
	I0603 12:41:27.703655 1096371 out.go:204]   - Configuring RBAC rules ...
	I0603 12:41:27.703765 1096371 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:41:27.703849 1096371 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:41:27.704016 1096371 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:41:27.704182 1096371 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:41:27.704344 1096371 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:41:27.704436 1096371 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:41:27.704562 1096371 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:41:27.704625 1096371 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:41:27.704682 1096371 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:41:27.704689 1096371 kubeadm.go:309] 
	I0603 12:41:27.704740 1096371 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:41:27.704746 1096371 kubeadm.go:309] 
	I0603 12:41:27.704816 1096371 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:41:27.704822 1096371 kubeadm.go:309] 
	I0603 12:41:27.704852 1096371 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:41:27.704908 1096371 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:41:27.704955 1096371 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:41:27.704962 1096371 kubeadm.go:309] 
	I0603 12:41:27.705011 1096371 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:41:27.705018 1096371 kubeadm.go:309] 
	I0603 12:41:27.705056 1096371 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:41:27.705062 1096371 kubeadm.go:309] 
	I0603 12:41:27.705103 1096371 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:41:27.705165 1096371 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:41:27.705242 1096371 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:41:27.705255 1096371 kubeadm.go:309] 
	I0603 12:41:27.705357 1096371 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:41:27.705446 1096371 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:41:27.705454 1096371 kubeadm.go:309] 
	I0603 12:41:27.705523 1096371 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token udpj77.zgtf6r34m22e6dpn \
	I0603 12:41:27.705611 1096371 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 12:41:27.705635 1096371 kubeadm.go:309] 	--control-plane 
	I0603 12:41:27.705641 1096371 kubeadm.go:309] 
	I0603 12:41:27.705708 1096371 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:41:27.705714 1096371 kubeadm.go:309] 
	I0603 12:41:27.705779 1096371 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token udpj77.zgtf6r34m22e6dpn \
	I0603 12:41:27.705879 1096371 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
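The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the node with the standard kubeadm recipe (sketch only; the path /var/lib/minikube/certs/ca.crt matches the certificatesDir used in this run and assumes an RSA CA key, which minikube generates by default):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'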
	I0603 12:41:27.705892 1096371 cni.go:84] Creating CNI manager for ""
	I0603 12:41:27.705897 1096371 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 12:41:27.707387 1096371 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 12:41:27.708551 1096371 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 12:41:27.713890 1096371 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 12:41:27.713909 1096371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 12:41:27.734853 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 12:41:28.054420 1096371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:41:28.054507 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:28.054538 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220492 minikube.k8s.io/updated_at=2024_06_03T12_41_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-220492 minikube.k8s.io/primary=true
	I0603 12:41:28.086452 1096371 ops.go:34] apiserver oom_adj: -16
	I0603 12:41:28.182311 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:28.682758 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:29.183361 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:29.682627 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:30.182859 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:30.683228 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:31.183069 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:31.682992 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:32.182664 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:32.683313 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:33.182544 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:33.683120 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:34.182933 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:34.682944 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:35.183074 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:35.682695 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:36.182914 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:36.683222 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:37.182640 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:37.682507 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:38.182629 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:38.682400 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:39.182712 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:39.682329 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:40.183320 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:40.683259 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:40.789456 1096371 kubeadm.go:1107] duration metric: took 12.735015654s to wait for elevateKubeSystemPrivileges
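The repeated "kubectl get sa default" calls above are a poll: before granting kube-system elevated RBAC, minikube waits for the default ServiceAccount to exist in the new cluster, which took about 12.7s here. Functionally this is roughly equivalent to the following loop (illustrative sketch, not minikube's actual code):

    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done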
	W0603 12:41:40.789505 1096371 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:41:40.789516 1096371 kubeadm.go:393] duration metric: took 24.045716128s to StartCluster
	I0603 12:41:40.789542 1096371 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:40.789635 1096371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:41:40.790400 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:40.790632 1096371 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:41:40.790661 1096371 start.go:240] waiting for startup goroutines ...
	I0603 12:41:40.790634 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 12:41:40.790658 1096371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:41:40.790725 1096371 addons.go:69] Setting storage-provisioner=true in profile "ha-220492"
	I0603 12:41:40.790751 1096371 addons.go:234] Setting addon storage-provisioner=true in "ha-220492"
	I0603 12:41:40.790755 1096371 addons.go:69] Setting default-storageclass=true in profile "ha-220492"
	I0603 12:41:40.790795 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:41:40.790796 1096371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-220492"
	I0603 12:41:40.790857 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:41:40.791200 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.791233 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.791244 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.791268 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.806968 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0603 12:41:40.807010 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
	I0603 12:41:40.807503 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.807550 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.808030 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.808051 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.808176 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.808201 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.808396 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.808584 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.808774 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:41:40.808989 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.809018 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.811030 1096371 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:41:40.811405 1096371 kapi.go:59] client config for ha-220492: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt", KeyFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key", CAFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 12:41:40.812024 1096371 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 12:41:40.812284 1096371 addons.go:234] Setting addon default-storageclass=true in "ha-220492"
	I0603 12:41:40.812334 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:41:40.812705 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.812742 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.824612 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0603 12:41:40.825060 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.825616 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.825641 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.825959 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.826180 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:41:40.827899 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:40.830068 1096371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:41:40.828430 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33655
	I0603 12:41:40.831614 1096371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:41:40.831636 1096371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:41:40.831656 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:40.831834 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.832331 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.832352 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.832736 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.833309 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.833358 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.834604 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:40.835055 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:40.835084 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:40.835215 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:40.835408 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:40.835571 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:40.835746 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:40.848804 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0603 12:41:40.849189 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.849622 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.849642 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.849982 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.850164 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:41:40.851627 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:40.851798 1096371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:41:40.851814 1096371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:41:40.851830 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:40.854207 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:40.854548 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:40.854577 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:40.854763 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:40.854937 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:40.855133 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:40.855293 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:41.006436 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 12:41:41.055489 1096371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:41:41.101156 1096371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:41:41.501620 1096371 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
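The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway 192.168.39.1 and enables the log plugin. One way to confirm the result from the test host (kubectl context name assumed to equal the profile name, as elsewhere in this report):

    kubectl --context ha-220492 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expected, per the sed expression above:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }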
	I0603 12:41:41.501705 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.501728 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.502034 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.502051 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.502059 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.502067 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.502340 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.502361 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.502428 1096371 main.go:141] libmachine: (ha-220492) DBG | Closing plugin on server side
	I0603 12:41:41.502489 1096371 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 12:41:41.502501 1096371 round_trippers.go:469] Request Headers:
	I0603 12:41:41.502511 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:41:41.502523 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:41:41.510300 1096371 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 12:41:41.511148 1096371 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 12:41:41.511171 1096371 round_trippers.go:469] Request Headers:
	I0603 12:41:41.511181 1096371 round_trippers.go:473]     Content-Type: application/json
	I0603 12:41:41.511190 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:41:41.511194 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:41:41.519775 1096371 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
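The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is minikube marking the standard StorageClass as the cluster default. A quick check from the test host (again assuming the context name matches the profile):

    kubectl --context ha-220492 get storageclass standard \
      -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'
    # should print "true" once the default-storageclass addon is applied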
	I0603 12:41:41.519949 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.519964 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.520298 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.520319 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.807170 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.807203 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.807610 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.807639 1096371 main.go:141] libmachine: (ha-220492) DBG | Closing plugin on server side
	I0603 12:41:41.807646 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.807663 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.807672 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.807941 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.807977 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.809742 1096371 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0603 12:41:41.811132 1096371 addons.go:510] duration metric: took 1.020469972s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0603 12:41:41.811167 1096371 start.go:245] waiting for cluster config update ...
	I0603 12:41:41.811181 1096371 start.go:254] writing updated cluster config ...
	I0603 12:41:41.812981 1096371 out.go:177] 
	I0603 12:41:41.814660 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:41:41.814750 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:41:41.816223 1096371 out.go:177] * Starting "ha-220492-m02" control-plane node in "ha-220492" cluster
	I0603 12:41:41.817447 1096371 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:41:41.817471 1096371 cache.go:56] Caching tarball of preloaded images
	I0603 12:41:41.817575 1096371 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:41:41.817588 1096371 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:41:41.817673 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:41:41.817850 1096371 start.go:360] acquireMachinesLock for ha-220492-m02: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:41:41.817914 1096371 start.go:364] duration metric: took 42.326µs to acquireMachinesLock for "ha-220492-m02"
	I0603 12:41:41.817939 1096371 start.go:93] Provisioning new machine with config: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:41:41.818039 1096371 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0603 12:41:41.819647 1096371 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 12:41:41.819745 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:41.819777 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:41.834827 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0603 12:41:41.835244 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:41.835692 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:41.835712 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:41.836046 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:41.836236 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetMachineName
	I0603 12:41:41.836356 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:41:41.836494 1096371 start.go:159] libmachine.API.Create for "ha-220492" (driver="kvm2")
	I0603 12:41:41.836512 1096371 client.go:168] LocalClient.Create starting
	I0603 12:41:41.836537 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 12:41:41.836569 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:41:41.836583 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:41:41.836644 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 12:41:41.836663 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:41:41.836673 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:41:41.836692 1096371 main.go:141] libmachine: Running pre-create checks...
	I0603 12:41:41.836700 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .PreCreateCheck
	I0603 12:41:41.836898 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetConfigRaw
	I0603 12:41:41.837273 1096371 main.go:141] libmachine: Creating machine...
	I0603 12:41:41.837284 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .Create
	I0603 12:41:41.837449 1096371 main.go:141] libmachine: (ha-220492-m02) Creating KVM machine...
	I0603 12:41:41.838645 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found existing default KVM network
	I0603 12:41:41.838775 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found existing private KVM network mk-ha-220492
	I0603 12:41:41.838942 1096371 main.go:141] libmachine: (ha-220492-m02) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02 ...
	I0603 12:41:41.838969 1096371 main.go:141] libmachine: (ha-220492-m02) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:41:41.839005 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:41.838909 1096793 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:41:41.839091 1096371 main.go:141] libmachine: (ha-220492-m02) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:41:42.098476 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:42.098332 1096793 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa...
	I0603 12:41:42.164226 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:42.164093 1096793 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/ha-220492-m02.rawdisk...
	I0603 12:41:42.164262 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Writing magic tar header
	I0603 12:41:42.164286 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Writing SSH key tar header
	I0603 12:41:42.164295 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:42.164206 1096793 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02 ...
	I0603 12:41:42.164306 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02
	I0603 12:41:42.164379 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 12:41:42.164412 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02 (perms=drwx------)
	I0603 12:41:42.164422 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:41:42.164433 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 12:41:42.164447 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 12:41:42.164453 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins
	I0603 12:41:42.164459 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home
	I0603 12:41:42.164466 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Skipping /home - not owner
	I0603 12:41:42.164510 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 12:41:42.164547 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 12:41:42.164561 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 12:41:42.164573 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 12:41:42.164587 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 12:41:42.164598 1096371 main.go:141] libmachine: (ha-220492-m02) Creating domain...
	I0603 12:41:42.165519 1096371 main.go:141] libmachine: (ha-220492-m02) define libvirt domain using xml: 
	I0603 12:41:42.165540 1096371 main.go:141] libmachine: (ha-220492-m02) <domain type='kvm'>
	I0603 12:41:42.165550 1096371 main.go:141] libmachine: (ha-220492-m02)   <name>ha-220492-m02</name>
	I0603 12:41:42.165557 1096371 main.go:141] libmachine: (ha-220492-m02)   <memory unit='MiB'>2200</memory>
	I0603 12:41:42.165568 1096371 main.go:141] libmachine: (ha-220492-m02)   <vcpu>2</vcpu>
	I0603 12:41:42.165573 1096371 main.go:141] libmachine: (ha-220492-m02)   <features>
	I0603 12:41:42.165581 1096371 main.go:141] libmachine: (ha-220492-m02)     <acpi/>
	I0603 12:41:42.165587 1096371 main.go:141] libmachine: (ha-220492-m02)     <apic/>
	I0603 12:41:42.165595 1096371 main.go:141] libmachine: (ha-220492-m02)     <pae/>
	I0603 12:41:42.165605 1096371 main.go:141] libmachine: (ha-220492-m02)     
	I0603 12:41:42.165612 1096371 main.go:141] libmachine: (ha-220492-m02)   </features>
	I0603 12:41:42.165621 1096371 main.go:141] libmachine: (ha-220492-m02)   <cpu mode='host-passthrough'>
	I0603 12:41:42.165648 1096371 main.go:141] libmachine: (ha-220492-m02)   
	I0603 12:41:42.165671 1096371 main.go:141] libmachine: (ha-220492-m02)   </cpu>
	I0603 12:41:42.165687 1096371 main.go:141] libmachine: (ha-220492-m02)   <os>
	I0603 12:41:42.165699 1096371 main.go:141] libmachine: (ha-220492-m02)     <type>hvm</type>
	I0603 12:41:42.165709 1096371 main.go:141] libmachine: (ha-220492-m02)     <boot dev='cdrom'/>
	I0603 12:41:42.165719 1096371 main.go:141] libmachine: (ha-220492-m02)     <boot dev='hd'/>
	I0603 12:41:42.165731 1096371 main.go:141] libmachine: (ha-220492-m02)     <bootmenu enable='no'/>
	I0603 12:41:42.165741 1096371 main.go:141] libmachine: (ha-220492-m02)   </os>
	I0603 12:41:42.165751 1096371 main.go:141] libmachine: (ha-220492-m02)   <devices>
	I0603 12:41:42.165764 1096371 main.go:141] libmachine: (ha-220492-m02)     <disk type='file' device='cdrom'>
	I0603 12:41:42.165782 1096371 main.go:141] libmachine: (ha-220492-m02)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/boot2docker.iso'/>
	I0603 12:41:42.165793 1096371 main.go:141] libmachine: (ha-220492-m02)       <target dev='hdc' bus='scsi'/>
	I0603 12:41:42.165801 1096371 main.go:141] libmachine: (ha-220492-m02)       <readonly/>
	I0603 12:41:42.165818 1096371 main.go:141] libmachine: (ha-220492-m02)     </disk>
	I0603 12:41:42.165829 1096371 main.go:141] libmachine: (ha-220492-m02)     <disk type='file' device='disk'>
	I0603 12:41:42.165840 1096371 main.go:141] libmachine: (ha-220492-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 12:41:42.165857 1096371 main.go:141] libmachine: (ha-220492-m02)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/ha-220492-m02.rawdisk'/>
	I0603 12:41:42.165869 1096371 main.go:141] libmachine: (ha-220492-m02)       <target dev='hda' bus='virtio'/>
	I0603 12:41:42.165877 1096371 main.go:141] libmachine: (ha-220492-m02)     </disk>
	I0603 12:41:42.165892 1096371 main.go:141] libmachine: (ha-220492-m02)     <interface type='network'>
	I0603 12:41:42.165904 1096371 main.go:141] libmachine: (ha-220492-m02)       <source network='mk-ha-220492'/>
	I0603 12:41:42.165913 1096371 main.go:141] libmachine: (ha-220492-m02)       <model type='virtio'/>
	I0603 12:41:42.165919 1096371 main.go:141] libmachine: (ha-220492-m02)     </interface>
	I0603 12:41:42.165930 1096371 main.go:141] libmachine: (ha-220492-m02)     <interface type='network'>
	I0603 12:41:42.165944 1096371 main.go:141] libmachine: (ha-220492-m02)       <source network='default'/>
	I0603 12:41:42.165958 1096371 main.go:141] libmachine: (ha-220492-m02)       <model type='virtio'/>
	I0603 12:41:42.165970 1096371 main.go:141] libmachine: (ha-220492-m02)     </interface>
	I0603 12:41:42.165979 1096371 main.go:141] libmachine: (ha-220492-m02)     <serial type='pty'>
	I0603 12:41:42.165991 1096371 main.go:141] libmachine: (ha-220492-m02)       <target port='0'/>
	I0603 12:41:42.165999 1096371 main.go:141] libmachine: (ha-220492-m02)     </serial>
	I0603 12:41:42.166005 1096371 main.go:141] libmachine: (ha-220492-m02)     <console type='pty'>
	I0603 12:41:42.166016 1096371 main.go:141] libmachine: (ha-220492-m02)       <target type='serial' port='0'/>
	I0603 12:41:42.166039 1096371 main.go:141] libmachine: (ha-220492-m02)     </console>
	I0603 12:41:42.166057 1096371 main.go:141] libmachine: (ha-220492-m02)     <rng model='virtio'>
	I0603 12:41:42.166064 1096371 main.go:141] libmachine: (ha-220492-m02)       <backend model='random'>/dev/random</backend>
	I0603 12:41:42.166071 1096371 main.go:141] libmachine: (ha-220492-m02)     </rng>
	I0603 12:41:42.166077 1096371 main.go:141] libmachine: (ha-220492-m02)     
	I0603 12:41:42.166081 1096371 main.go:141] libmachine: (ha-220492-m02)     
	I0603 12:41:42.166087 1096371 main.go:141] libmachine: (ha-220492-m02)   </devices>
	I0603 12:41:42.166093 1096371 main.go:141] libmachine: (ha-220492-m02) </domain>
	I0603 12:41:42.166100 1096371 main.go:141] libmachine: (ha-220492-m02) 
	I0603 12:41:42.172501 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:3c:64:73 in network default
	I0603 12:41:42.173058 1096371 main.go:141] libmachine: (ha-220492-m02) Ensuring networks are active...
	I0603 12:41:42.173071 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:42.173840 1096371 main.go:141] libmachine: (ha-220492-m02) Ensuring network default is active
	I0603 12:41:42.174161 1096371 main.go:141] libmachine: (ha-220492-m02) Ensuring network mk-ha-220492 is active
	I0603 12:41:42.174575 1096371 main.go:141] libmachine: (ha-220492-m02) Getting domain xml...
	I0603 12:41:42.175289 1096371 main.go:141] libmachine: (ha-220492-m02) Creating domain...
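Note: libmachine defines and boots the domain through the libvirt API directly. For readers reproducing this step by hand, the same result can be approximated with virsh; this is only an illustrative sketch (the XML file name is an assumption, the connection URI is the KVMQemuURI from the config logged above):

  # Define and start a domain equivalent to the one logged above,
  # assuming the XML was saved to ha-220492-m02.xml (hypothetical file name).
  virsh --connect qemu:///system define ha-220492-m02.xml
  virsh --connect qemu:///system start ha-220492-m02
  virsh --connect qemu:///system domifaddr ha-220492-m02    # interface addresses once DHCP has answered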
	I0603 12:41:43.408023 1096371 main.go:141] libmachine: (ha-220492-m02) Waiting to get IP...
	I0603 12:41:43.408787 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:43.409234 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:43.409266 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:43.409198 1096793 retry.go:31] will retry after 231.363398ms: waiting for machine to come up
	I0603 12:41:43.643040 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:43.643639 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:43.643666 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:43.643579 1096793 retry.go:31] will retry after 353.063611ms: waiting for machine to come up
	I0603 12:41:43.998171 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:43.998655 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:43.998688 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:43.998593 1096793 retry.go:31] will retry after 405.64874ms: waiting for machine to come up
	I0603 12:41:44.406228 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:44.406687 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:44.406712 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:44.406641 1096793 retry.go:31] will retry after 471.518099ms: waiting for machine to come up
	I0603 12:41:44.879308 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:44.879787 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:44.879818 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:44.879742 1096793 retry.go:31] will retry after 670.162296ms: waiting for machine to come up
	I0603 12:41:45.551947 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:45.552455 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:45.552508 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:45.552449 1096793 retry.go:31] will retry after 784.973205ms: waiting for machine to come up
	I0603 12:41:46.339394 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:46.339836 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:46.339869 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:46.339773 1096793 retry.go:31] will retry after 946.869881ms: waiting for machine to come up
	I0603 12:41:47.288357 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:47.288753 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:47.288780 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:47.288698 1096793 retry.go:31] will retry after 1.43924214s: waiting for machine to come up
	I0603 12:41:48.729639 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:48.730058 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:48.730084 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:48.730007 1096793 retry.go:31] will retry after 1.520365565s: waiting for machine to come up
	I0603 12:41:50.252526 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:50.252955 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:50.252979 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:50.252908 1096793 retry.go:31] will retry after 1.523540957s: waiting for machine to come up
	I0603 12:41:51.778661 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:51.779119 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:51.779143 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:51.779069 1096793 retry.go:31] will retry after 2.17843585s: waiting for machine to come up
	I0603 12:41:53.959571 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:53.960016 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:53.960046 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:53.959992 1096793 retry.go:31] will retry after 3.266960434s: waiting for machine to come up
	I0603 12:41:57.228322 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:57.228849 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:57.228872 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:57.228794 1096793 retry.go:31] will retry after 3.22328969s: waiting for machine to come up
	I0603 12:42:00.454701 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:00.455157 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:42:00.455195 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:42:00.455113 1096793 retry.go:31] will retry after 4.667919915s: waiting for machine to come up
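The retries above are libmachine polling libvirt for a DHCP lease on the new NIC with a growing, randomized backoff (retry.go). A rough manual equivalent, using the network name and MAC address from the log (the loop itself is an illustrative assumption, not minikube's code):

  # Wait until the mk-ha-220492 network hands out a lease for the domain's MAC.
  while ! virsh --connect qemu:///system net-dhcp-leases mk-ha-220492 | grep -q '52:54:00:5d:56:2b'; do
    sleep 2   # the real code backs off from ~230ms up to several seconds
  done
  virsh --connect qemu:///system net-dhcp-leases mk-ha-220492 | grep '52:54:00:5d:56:2b'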
	I0603 12:42:05.126452 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.126859 1096371 main.go:141] libmachine: (ha-220492-m02) Found IP for machine: 192.168.39.106
	I0603 12:42:05.126887 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has current primary IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.126895 1096371 main.go:141] libmachine: (ha-220492-m02) Reserving static IP address...
	I0603 12:42:05.127264 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find host DHCP lease matching {name: "ha-220492-m02", mac: "52:54:00:5d:56:2b", ip: "192.168.39.106"} in network mk-ha-220492
	I0603 12:42:05.200106 1096371 main.go:141] libmachine: (ha-220492-m02) Reserved static IP address: 192.168.39.106
	I0603 12:42:05.200140 1096371 main.go:141] libmachine: (ha-220492-m02) Waiting for SSH to be available...
	I0603 12:42:05.200150 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Getting to WaitForSSH function...
	I0603 12:42:05.202975 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.203470 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.203507 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.203695 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Using SSH client type: external
	I0603 12:42:05.203727 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa (-rw-------)
	I0603 12:42:05.203757 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:42:05.203775 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | About to run SSH command:
	I0603 12:42:05.203792 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | exit 0
	I0603 12:42:05.329602 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | SSH cmd err, output: <nil>: 
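The empty error above means the external SSH probe succeeded. Flattened into a single command line, the invocation built from the options logged a few lines up is essentially (key path, options, user, and address copied from the log):

  ssh -F /dev/null \
    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -o IdentitiesOnly=yes \
    -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa \
    -p 22 docker@192.168.39.106 'exit 0'    # exit status 0 == SSH is reachable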
	I0603 12:42:05.329923 1096371 main.go:141] libmachine: (ha-220492-m02) KVM machine creation complete!
	I0603 12:42:05.330264 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetConfigRaw
	I0603 12:42:05.330860 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:05.331081 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:05.331272 1096371 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 12:42:05.331290 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:42:05.332521 1096371 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 12:42:05.332541 1096371 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 12:42:05.332549 1096371 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 12:42:05.332556 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.335101 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.335490 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.335519 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.335663 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.335877 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.336039 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.336201 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.336355 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:05.336562 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:05.336573 1096371 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 12:42:05.444957 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:42:05.444990 1096371 main.go:141] libmachine: Detecting the provisioner...
	I0603 12:42:05.445000 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.448052 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.448397 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.448425 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.448648 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.448850 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.448996 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.449126 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.449297 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:05.449483 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:05.449494 1096371 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 12:42:05.558489 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 12:42:05.558575 1096371 main.go:141] libmachine: found compatible host: buildroot
	I0603 12:42:05.558582 1096371 main.go:141] libmachine: Provisioning with buildroot...
	I0603 12:42:05.558591 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetMachineName
	I0603 12:42:05.558853 1096371 buildroot.go:166] provisioning hostname "ha-220492-m02"
	I0603 12:42:05.558884 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetMachineName
	I0603 12:42:05.559079 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.561873 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.562264 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.562292 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.562440 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.562650 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.562804 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.562961 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.563147 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:05.563333 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:05.563349 1096371 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220492-m02 && echo "ha-220492-m02" | sudo tee /etc/hostname
	I0603 12:42:05.688813 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492-m02
	
	I0603 12:42:05.688851 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.691627 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.692015 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.692047 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.692236 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.692475 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.692661 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.692850 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.693027 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:05.693215 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:05.693238 1096371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220492-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220492-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220492-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:42:05.810157 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
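With the hostname script above applied cleanly (empty error), the result can be spot-checked over the same SSH session; these verification commands are a suggestion, not something the test itself runs:

  hostname                        # expect: ha-220492-m02
  grep '^127.0.1.1' /etc/hosts    # expect: 127.0.1.1 ha-220492-m02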
	I0603 12:42:05.810197 1096371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:42:05.810216 1096371 buildroot.go:174] setting up certificates
	I0603 12:42:05.810227 1096371 provision.go:84] configureAuth start
	I0603 12:42:05.810240 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetMachineName
	I0603 12:42:05.810528 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:42:05.813279 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.813619 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.813647 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.813833 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.815843 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.816159 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.816204 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.816330 1096371 provision.go:143] copyHostCerts
	I0603 12:42:05.816374 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:42:05.816415 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 12:42:05.816428 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:42:05.816508 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:42:05.816602 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:42:05.816626 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 12:42:05.816634 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:42:05.816674 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:42:05.816735 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:42:05.816759 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 12:42:05.816768 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:42:05.816800 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:42:05.816866 1096371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.ha-220492-m02 san=[127.0.0.1 192.168.39.106 ha-220492-m02 localhost minikube]
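provision.go generates that server certificate in Go. As a substitute sketch only (not minikube's implementation; file names follow the paths in the log, the SAN list is the one printed above, and the flags assume bash with a reasonably recent OpenSSL), the equivalent certificate could be produced like this:

  # Key + CSR for the machine, then sign with the profile's CA, adding the SANs from the log.
  openssl req -new -newkey rsa:2048 -nodes \
    -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-220492-m02"
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -out server.pem -days 1095 \
    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.106,DNS:ha-220492-m02,DNS:localhost,DNS:minikube')
  # -days 1095 matches the 26280h CertExpiration in the profile config above.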
	I0603 12:42:05.949501 1096371 provision.go:177] copyRemoteCerts
	I0603 12:42:05.949574 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:42:05.949609 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.952377 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.952708 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.952742 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.952896 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.953080 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.953262 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.953400 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:42:06.039497 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 12:42:06.039603 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:42:06.065277 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 12:42:06.065349 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 12:42:06.091385 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 12:42:06.091455 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:42:06.122390 1096371 provision.go:87] duration metric: took 312.14592ms to configureAuth
	I0603 12:42:06.122424 1096371 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:42:06.122671 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:42:06.122781 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.125780 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.126255 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.126289 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.126374 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.126579 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.126777 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.126945 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.127161 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:06.127366 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:06.127385 1096371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:42:06.414766 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
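The %!s(MISSING) token above (it recurs in a couple of later commands) is an artifact of how minikube echoes the command into its own log: the command string itself contains a %s meant for the remote shell's printf, and when that string is passed through a Go format call without a matching argument it is rendered as %!s(MISSING). Going by the tee'd output that follows, the command actually executed is approximately:

  sudo mkdir -p /etc/sysconfig && printf %s "
  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio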
	I0603 12:42:06.414806 1096371 main.go:141] libmachine: Checking connection to Docker...
	I0603 12:42:06.414828 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetURL
	I0603 12:42:06.416212 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Using libvirt version 6000000
	I0603 12:42:06.418443 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.418867 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.418889 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.419077 1096371 main.go:141] libmachine: Docker is up and running!
	I0603 12:42:06.419093 1096371 main.go:141] libmachine: Reticulating splines...
	I0603 12:42:06.419101 1096371 client.go:171] duration metric: took 24.5825817s to LocalClient.Create
	I0603 12:42:06.419122 1096371 start.go:167] duration metric: took 24.582629095s to libmachine.API.Create "ha-220492"
	I0603 12:42:06.419130 1096371 start.go:293] postStartSetup for "ha-220492-m02" (driver="kvm2")
	I0603 12:42:06.419141 1096371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:42:06.419158 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.419416 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:42:06.419445 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.421613 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.421902 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.421930 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.422008 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.422169 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.422328 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.422486 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:42:06.507721 1096371 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:42:06.512254 1096371 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:42:06.512286 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:42:06.512367 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:42:06.512464 1096371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 12:42:06.512479 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 12:42:06.512597 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:42:06.522036 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:42:06.546236 1096371 start.go:296] duration metric: took 127.089603ms for postStartSetup
	I0603 12:42:06.546292 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetConfigRaw
	I0603 12:42:06.546939 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:42:06.549490 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.549831 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.549853 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.550152 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:42:06.550339 1096371 start.go:128] duration metric: took 24.732287104s to createHost
	I0603 12:42:06.550363 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.552701 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.552989 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.553012 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.553239 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.553450 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.553592 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.553727 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.553864 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:06.554029 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:06.554040 1096371 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:42:06.666580 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418526.642377736
	
	I0603 12:42:06.666608 1096371 fix.go:216] guest clock: 1717418526.642377736
	I0603 12:42:06.666618 1096371 fix.go:229] Guest: 2024-06-03 12:42:06.642377736 +0000 UTC Remote: 2024-06-03 12:42:06.550350299 +0000 UTC m=+81.432920285 (delta=92.027437ms)
	I0603 12:42:06.666639 1096371 fix.go:200] guest clock delta is within tolerance: 92.027437ms
	I0603 12:42:06.666646 1096371 start.go:83] releasing machines lock for "ha-220492-m02", held for 24.848719588s
	I0603 12:42:06.666672 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.666965 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:42:06.670944 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.671366 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.671395 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.673668 1096371 out.go:177] * Found network options:
	I0603 12:42:06.674997 1096371 out.go:177]   - NO_PROXY=192.168.39.6
	W0603 12:42:06.676110 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 12:42:06.676136 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.676719 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.676925 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.677049 1096371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0603 12:42:06.677092 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 12:42:06.677101 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.677171 1096371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:42:06.677227 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.679770 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.679946 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.680124 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.680150 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.680321 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.680406 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.680437 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.680508 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.680602 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.680716 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.680715 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.680884 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.680881 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:42:06.681009 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:42:06.928955 1096371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:42:06.935185 1096371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:42:06.935268 1096371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:42:06.950886 1096371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:42:06.950909 1096371 start.go:494] detecting cgroup driver to use...
	I0603 12:42:06.950967 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:42:06.968906 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:42:06.984061 1096371 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:42:06.984127 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:42:06.997903 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:42:07.011677 1096371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:42:07.132411 1096371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:42:07.272200 1096371 docker.go:233] disabling docker service ...
	I0603 12:42:07.272297 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:42:07.289229 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:42:07.303039 1096371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:42:07.433572 1096371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:42:07.546622 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:42:07.560394 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:42:07.578937 1096371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:42:07.579006 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.589108 1096371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:42:07.589166 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.600314 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.610061 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.619841 1096371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:42:07.629789 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.639901 1096371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.656766 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.667492 1096371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:42:07.677190 1096371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:42:07.677233 1096371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:42:07.691268 1096371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:42:07.700972 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:42:07.826553 1096371 ssh_runner.go:195] Run: sudo systemctl restart crio
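The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), loads br_netfilter, enables IPv4 forwarding, and restarts CRI-O. A quick way to confirm the end state on the node — suggested verification only, not part of the test run:

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  lsmod | grep br_netfilter       # loaded by the modprobe step above
  sysctl net.ipv4.ip_forward      # set to 1 by the echo above
  systemctl is-active crio        # expect: active after the restart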
	I0603 12:42:07.961045 1096371 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:42:07.961117 1096371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:42:07.965738 1096371 start.go:562] Will wait 60s for crictl version
	I0603 12:42:07.965794 1096371 ssh_runner.go:195] Run: which crictl
	I0603 12:42:07.969909 1096371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:42:08.019940 1096371 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:42:08.020041 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:42:08.048407 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:42:08.078649 1096371 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:42:08.079999 1096371 out.go:177]   - env NO_PROXY=192.168.39.6
	I0603 12:42:08.081200 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:42:08.083757 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:08.084130 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:08.084152 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:08.084412 1096371 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:42:08.088707 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:42:08.101315 1096371 mustload.go:65] Loading cluster: ha-220492
	I0603 12:42:08.101560 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:42:08.101805 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:42:08.101873 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:42:08.116945 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0603 12:42:08.117382 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:42:08.117914 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:42:08.117939 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:42:08.118276 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:42:08.118501 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:42:08.120058 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:42:08.120367 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:42:08.120392 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:42:08.135995 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44789
	I0603 12:42:08.136369 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:42:08.136845 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:42:08.136870 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:42:08.137213 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:42:08.137417 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:42:08.137610 1096371 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492 for IP: 192.168.39.106
	I0603 12:42:08.137626 1096371 certs.go:194] generating shared ca certs ...
	I0603 12:42:08.137640 1096371 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:42:08.137760 1096371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:42:08.137795 1096371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:42:08.137807 1096371 certs.go:256] generating profile certs ...
	I0603 12:42:08.137872 1096371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key
	I0603 12:42:08.137896 1096371 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.11e6568a
	I0603 12:42:08.137908 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.11e6568a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.106 192.168.39.254]
	I0603 12:42:08.319810 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.11e6568a ...
	I0603 12:42:08.319845 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.11e6568a: {Name:mkd21a2eba7380f69e7d36df8d1f2bd501844ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:42:08.320044 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.11e6568a ...
	I0603 12:42:08.320064 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.11e6568a: {Name:mkfc0c55f94b5f637b57a4905b366f7655de4d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:42:08.320172 1096371 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.11e6568a -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt
	I0603 12:42:08.320343 1096371 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.11e6568a -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key
	I0603 12:42:08.320589 1096371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key
	I0603 12:42:08.320622 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 12:42:08.320663 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 12:42:08.320685 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 12:42:08.320703 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 12:42:08.320721 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 12:42:08.320849 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 12:42:08.320874 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 12:42:08.320893 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 12:42:08.320984 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 12:42:08.321032 1096371 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 12:42:08.321045 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:42:08.321076 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:42:08.321109 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:42:08.321140 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 12:42:08.321226 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:42:08.321270 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 12:42:08.321291 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 12:42:08.321308 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:42:08.321358 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:42:08.324577 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:42:08.325002 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:42:08.325024 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:42:08.325209 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:42:08.325476 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:42:08.325663 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:42:08.325949 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:42:08.401753 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 12:42:08.407623 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 12:42:08.419823 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 12:42:08.424030 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0603 12:42:08.435432 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 12:42:08.439640 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 12:42:08.450791 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 12:42:08.459749 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0603 12:42:08.470705 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 12:42:08.474974 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 12:42:08.485132 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 12:42:08.489458 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0603 12:42:08.500170 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:42:08.528731 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:42:08.552333 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:42:08.576303 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:42:08.599871 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 12:42:08.625361 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 12:42:08.651137 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:42:08.674060 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:42:08.696650 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 12:42:08.719885 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 12:42:08.742499 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:42:08.765530 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 12:42:08.782242 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0603 12:42:08.798849 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 12:42:08.815252 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0603 12:42:08.832083 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 12:42:08.848487 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0603 12:42:08.865730 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 12:42:08.882211 1096371 ssh_runner.go:195] Run: openssl version
	I0603 12:42:08.888010 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 12:42:08.898794 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 12:42:08.903261 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 12:42:08.903330 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 12:42:08.909128 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:42:08.919702 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:42:08.929911 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:42:08.934373 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:42:08.934426 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:42:08.939885 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:42:08.950480 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 12:42:08.961118 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 12:42:08.965509 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 12:42:08.965557 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 12:42:08.971051 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 12:42:08.981382 1096371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:42:08.985271 1096371 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 12:42:08.985331 1096371 kubeadm.go:928] updating node {m02 192.168.39.106 8443 v1.30.1 crio true true} ...
	I0603 12:42:08.985451 1096371 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220492-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:42:08.985480 1096371 kube-vip.go:115] generating kube-vip config ...
	I0603 12:42:08.985523 1096371 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 12:42:09.000417 1096371 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 12:42:09.000502 1096371 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 12:42:09.000556 1096371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:42:09.010330 1096371 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 12:42:09.010391 1096371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 12:42:09.020126 1096371 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 12:42:09.020155 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 12:42:09.020164 1096371 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0603 12:42:09.020177 1096371 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0603 12:42:09.020235 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 12:42:09.024594 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 12:42:09.024625 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 12:42:09.662346 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 12:42:09.662438 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 12:42:09.667373 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 12:42:09.667408 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 12:42:14.371306 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:42:14.386191 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 12:42:14.386288 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 12:42:14.390761 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 12:42:14.390790 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0603 12:42:14.791630 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 12:42:14.801234 1096371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 12:42:14.818132 1096371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:42:14.834413 1096371 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 12:42:14.850605 1096371 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 12:42:14.854560 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:42:14.866382 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:42:14.992955 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:42:15.011459 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:42:15.011896 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:42:15.011955 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:42:15.027141 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I0603 12:42:15.027605 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:42:15.028094 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:42:15.028116 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:42:15.028445 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:42:15.028698 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:42:15.028862 1096371 start.go:316] joinCluster: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:42:15.028982 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 12:42:15.029010 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:42:15.031978 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:42:15.032398 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:42:15.032428 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:42:15.032568 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:42:15.032733 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:42:15.032882 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:42:15.033039 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:42:15.202036 1096371 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:42:15.202116 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token unnm3w.kbp0iaoodjba0o8t --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220492-m02 --control-plane --apiserver-advertise-address=192.168.39.106 --apiserver-bind-port=8443"
	I0603 12:42:36.806620 1096371 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token unnm3w.kbp0iaoodjba0o8t --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220492-m02 --control-plane --apiserver-advertise-address=192.168.39.106 --apiserver-bind-port=8443": (21.604476371s)
	I0603 12:42:36.806666 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 12:42:37.291847 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220492-m02 minikube.k8s.io/updated_at=2024_06_03T12_42_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-220492 minikube.k8s.io/primary=false
	I0603 12:42:37.460680 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-220492-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 12:42:37.597348 1096371 start.go:318] duration metric: took 22.568467348s to joinCluster
	I0603 12:42:37.597465 1096371 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:42:37.599025 1096371 out.go:177] * Verifying Kubernetes components...
	I0603 12:42:37.597819 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:42:37.600512 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:42:37.854079 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:42:37.909944 1096371 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:42:37.910331 1096371 kapi.go:59] client config for ha-220492: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt", KeyFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key", CAFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 12:42:37.910437 1096371 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.6:8443
	I0603 12:42:37.910748 1096371 node_ready.go:35] waiting up to 6m0s for node "ha-220492-m02" to be "Ready" ...
	I0603 12:42:37.910885 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:37.910899 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:37.910910 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:37.910917 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:37.921735 1096371 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 12:42:38.411681 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:38.411706 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:38.411714 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:38.411718 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:38.415557 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:38.911551 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:38.911575 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:38.911588 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:38.911595 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:38.914987 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:39.411024 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:39.411052 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:39.411062 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:39.411066 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:39.415523 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:39.911698 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:39.911721 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:39.911732 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:39.911737 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:39.914421 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:39.915151 1096371 node_ready.go:53] node "ha-220492-m02" has status "Ready":"False"
	I0603 12:42:40.411860 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:40.411891 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:40.411902 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:40.411909 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:40.414709 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:40.911073 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:40.911101 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:40.911113 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:40.911118 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:41.071044 1096371 round_trippers.go:574] Response Status: 200 OK in 159 milliseconds
	I0603 12:42:41.411483 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:41.411510 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:41.411522 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:41.411529 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:41.415459 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:41.911745 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:41.911768 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:41.911776 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:41.911780 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:41.915242 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:41.915823 1096371 node_ready.go:53] node "ha-220492-m02" has status "Ready":"False"
	I0603 12:42:42.411039 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:42.411060 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:42.411069 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:42.411072 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:42.414058 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:42.910998 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:42.911028 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:42.911038 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:42.911042 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:42.914644 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:43.411647 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:43.411676 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.411688 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.411696 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.415081 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:43.911305 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:43.911330 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.911338 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.911342 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.914504 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:43.915152 1096371 node_ready.go:49] node "ha-220492-m02" has status "Ready":"True"
	I0603 12:42:43.915172 1096371 node_ready.go:38] duration metric: took 6.00438251s for node "ha-220492-m02" to be "Ready" ...
	I0603 12:42:43.915182 1096371 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:42:43.915275 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:43.915284 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.915291 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.915294 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.919990 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:43.926100 1096371 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.926184 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-d2tgp
	I0603 12:42:43.926197 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.926204 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.926212 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.928912 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.929620 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:43.929636 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.929644 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.929648 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.932075 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.932651 1096371 pod_ready.go:92] pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:43.932678 1096371 pod_ready.go:81] duration metric: took 6.551142ms for pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.932690 1096371 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.932745 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-q7687
	I0603 12:42:43.932753 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.932759 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.932765 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.935012 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.935763 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:43.935780 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.935787 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.935791 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.938245 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.938797 1096371 pod_ready.go:92] pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:43.938818 1096371 pod_ready.go:81] duration metric: took 6.12059ms for pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.938831 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.938896 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492
	I0603 12:42:43.938906 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.938916 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.938926 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.947608 1096371 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 12:42:43.948266 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:43.948283 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.948290 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.948293 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.950370 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.950917 1096371 pod_ready.go:92] pod "etcd-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:43.950934 1096371 pod_ready.go:81] duration metric: took 12.093433ms for pod "etcd-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.950944 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.950993 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:43.951000 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.951006 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.951010 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.953171 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.953798 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:43.953814 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.953820 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.953824 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.955996 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:44.452050 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:44.452080 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:44.452097 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:44.452102 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:44.456918 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:44.457562 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:44.457578 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:44.457586 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:44.457589 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:44.460143 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:44.952171 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:44.952196 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:44.952204 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:44.952208 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:44.955658 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:44.956445 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:44.956464 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:44.956477 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:44.956484 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:44.959182 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:45.451688 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:45.451717 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:45.451733 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:45.451742 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:45.455265 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:45.455897 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:45.455917 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:45.455925 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:45.455931 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:45.458762 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:45.951811 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:45.951835 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:45.951844 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:45.951848 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:45.955024 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:45.955723 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:45.955740 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:45.955747 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:45.955750 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:45.958194 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:45.958765 1096371 pod_ready.go:102] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 12:42:46.452152 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:46.452175 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:46.452183 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:46.452188 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:46.455462 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:46.456019 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:46.456035 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:46.456045 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:46.456051 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:46.460469 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:46.951982 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:46.952007 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:46.952015 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:46.952020 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:46.955468 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:46.956197 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:46.956211 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:46.956218 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:46.956229 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:46.958688 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:47.451803 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:47.451831 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:47.451838 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:47.451843 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:47.454748 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:47.455383 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:47.455400 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:47.455407 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:47.455418 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:47.457933 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:47.951875 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:47.951900 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:47.951908 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:47.951913 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:47.955431 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:47.956378 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:47.956396 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:47.956404 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:47.956408 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:47.959048 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:47.959589 1096371 pod_ready.go:102] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 12:42:48.451935 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:48.451960 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:48.451970 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:48.451977 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:48.455145 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:48.455754 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:48.455773 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:48.455784 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:48.455789 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:48.458662 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:48.951807 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:48.951835 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:48.951843 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:48.951847 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:48.955036 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:48.955613 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:48.955628 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:48.955635 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:48.955639 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:48.957794 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:49.451700 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:49.451726 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:49.451735 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:49.451738 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:49.454867 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:49.455697 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:49.455718 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:49.455729 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:49.455738 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:49.458332 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:49.951291 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:49.951320 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:49.951329 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:49.951333 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:49.954813 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:49.955504 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:49.955519 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:49.955527 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:49.955531 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:49.958052 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:50.451165 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:50.451188 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:50.451195 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:50.451203 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:50.454033 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:50.454621 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:50.454636 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:50.454644 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:50.454648 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:50.457230 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:50.457704 1096371 pod_ready.go:102] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 12:42:50.951248 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:50.951272 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:50.951280 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:50.951283 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:50.954643 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:50.955330 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:50.955346 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:50.955353 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:50.955357 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:50.957803 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.451914 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:51.451937 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.451946 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.451949 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.455022 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:51.455738 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:51.455752 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.455758 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.455762 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.458085 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.458516 1096371 pod_ready.go:92] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.458535 1096371 pod_ready.go:81] duration metric: took 7.507583146s for pod "etcd-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.458550 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.458614 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492
	I0603 12:42:51.458622 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.458629 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.458634 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.460620 1096371 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 12:42:51.461255 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:51.461272 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.461281 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.461286 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.467387 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:42:51.467827 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.467847 1096371 pod_ready.go:81] duration metric: took 9.291191ms for pod "kube-apiserver-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.467855 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.467903 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m02
	I0603 12:42:51.467910 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.467917 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.467923 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.470179 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.470748 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:51.470761 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.470768 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.470772 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.472713 1096371 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 12:42:51.473306 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.473325 1096371 pod_ready.go:81] duration metric: took 5.462411ms for pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.473336 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.473388 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492
	I0603 12:42:51.473399 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.473427 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.473439 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.476070 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.477078 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:51.477094 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.477102 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.477106 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.479871 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.480529 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.480545 1096371 pod_ready.go:81] duration metric: took 7.202574ms for pod "kube-controller-manager-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.480555 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.480596 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m02
	I0603 12:42:51.480604 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.480611 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.480616 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.483040 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.512090 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:51.512112 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.512122 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.512126 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.515340 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:51.980909 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m02
	I0603 12:42:51.980932 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.980938 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.980941 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.984835 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:51.985779 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:51.985799 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.985807 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.985813 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.988339 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.988919 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.988941 1096371 pod_ready.go:81] duration metric: took 508.378458ms for pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.988953 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dkzgt" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:52.112349 1096371 request.go:629] Waited for 123.283767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dkzgt
	I0603 12:42:52.112416 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dkzgt
	I0603 12:42:52.112421 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.112429 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.112435 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.117165 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:52.311406 1096371 request.go:629] Waited for 193.273989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:52.311482 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:52.311488 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.311498 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.311506 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.315177 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:52.315744 1096371 pod_ready.go:92] pod "kube-proxy-dkzgt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:52.315762 1096371 pod_ready.go:81] duration metric: took 326.801779ms for pod "kube-proxy-dkzgt" in "kube-system" namespace to be "Ready" ...
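The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's local rate limiter, not from the API server. A minimal sketch, assuming a standard kubeconfig and purely illustrative limits (these values are not minikube's), of raising QPS/Burst so bursts of polling GETs are not delayed locally:

```go
// Sketch: raise client-go's client-side rate limits so bursts of GET requests
// (like the pod/node polling above) are not queued by the local limiter.
// The QPS/Burst values are illustrative assumptions, not minikube's settings.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults are QPS=5, Burst=10; the "client-side throttling"
	// waits in the log are the client pausing for this local limiter.
	cfg.QPS = 50
	cfg.Burst = 100

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}
```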
	I0603 12:42:52.315777 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2hpg" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:52.511944 1096371 request.go:629] Waited for 196.053868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2hpg
	I0603 12:42:52.512023 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2hpg
	I0603 12:42:52.512030 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.512043 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.512056 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.515570 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:52.712057 1096371 request.go:629] Waited for 195.418875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:52.712139 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:52.712150 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.712162 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.712171 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.716555 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:52.717768 1096371 pod_ready.go:92] pod "kube-proxy-w2hpg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:52.717792 1096371 pod_ready.go:81] duration metric: took 402.006709ms for pod "kube-proxy-w2hpg" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:52.717802 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:52.911792 1096371 request.go:629] Waited for 193.913411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492
	I0603 12:42:52.911866 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492
	I0603 12:42:52.911872 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.911880 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.911884 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.915171 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:53.112324 1096371 request.go:629] Waited for 196.393723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:53.112419 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:53.112426 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.112437 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.112443 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.116086 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:53.116707 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:53.116727 1096371 pod_ready.go:81] duration metric: took 398.918778ms for pod "kube-scheduler-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:53.116740 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:53.311802 1096371 request.go:629] Waited for 194.958528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m02
	I0603 12:42:53.311891 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m02
	I0603 12:42:53.311902 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.311914 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.311922 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.315421 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:53.511641 1096371 request.go:629] Waited for 195.386969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:53.511734 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:53.511750 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.511761 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.511766 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.515114 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:53.515926 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:53.515948 1096371 pod_ready.go:81] duration metric: took 399.201094ms for pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:53.515959 1096371 pod_ready.go:38] duration metric: took 9.600737698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
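The pod_ready.go lines above poll each system pod (and its node) until the pod's Ready condition reports True. A minimal client-go sketch of that check, assuming a kubeconfig pointed at the cluster; isPodReady is a hypothetical helper, not minikube's implementation:

```go
// Sketch of the readiness check performed by the pod_ready.go lines above:
// fetch a pod from kube-system and report whether its Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the PodReady condition is present and True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-220492-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}
```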
	I0603 12:42:53.515975 1096371 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:42:53.516039 1096371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:42:53.538237 1096371 api_server.go:72] duration metric: took 15.940722259s to wait for apiserver process to appear ...
	I0603 12:42:53.538270 1096371 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:42:53.538305 1096371 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0603 12:42:53.546310 1096371 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0603 12:42:53.546385 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0603 12:42:53.546393 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.546402 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.546408 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.547445 1096371 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 12:42:53.547567 1096371 api_server.go:141] control plane version: v1.30.1
	I0603 12:42:53.547591 1096371 api_server.go:131] duration metric: took 9.311823ms to wait for apiserver health ...
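Once the pods report Ready, the apiserver itself is probed at /healthz (expecting the literal body "ok") and /version. A stdlib-only sketch of the same probe; skipping TLS verification and relying on anonymous access to these endpoints are simplifying assumptions for the example, not what the test harness does:

```go
// Sketch of the apiserver health probe shown above: GET /healthz and expect
// the body "ok", then GET /version for the control-plane version. TLS
// verification is skipped only to keep the example short (assumption).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.6:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

	ver, err := client.Get("https://192.168.39.6:8443/version")
	if err != nil {
		panic(err)
	}
	defer ver.Body.Close()
	info, _ := io.ReadAll(ver.Body)
	fmt.Printf("version: %s\n", info) // JSON including gitVersion v1.30.1
}
```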
	I0603 12:42:53.547609 1096371 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:42:53.712035 1096371 request.go:629] Waited for 164.33041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:53.712119 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:53.712130 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.712141 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.712146 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.718089 1096371 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:42:53.723286 1096371 system_pods.go:59] 17 kube-system pods found
	I0603 12:42:53.723315 1096371 system_pods.go:61] "coredns-7db6d8ff4d-d2tgp" [534e15ed-2e68-4275-8725-099d7240c25d] Running
	I0603 12:42:53.723320 1096371 system_pods.go:61] "coredns-7db6d8ff4d-q7687" [4e78d9e6-8feb-44ef-b44d-ed6039ab00ee] Running
	I0603 12:42:53.723326 1096371 system_pods.go:61] "etcd-ha-220492" [f2bcc3d2-bb06-4775-9080-72f7652a48b1] Running
	I0603 12:42:53.723330 1096371 system_pods.go:61] "etcd-ha-220492-m02" [035f5269-9ad9-4be8-a582-59bf15e1f6f1] Running
	I0603 12:42:53.723333 1096371 system_pods.go:61] "kindnet-5p8f7" [12b97c9f-e363-42c3-9ac9-d808c47de63a] Running
	I0603 12:42:53.723335 1096371 system_pods.go:61] "kindnet-hbl6v" [9f697f13-4a60-4247-bb5e-a8bcdd3336cd] Running
	I0603 12:42:53.723338 1096371 system_pods.go:61] "kube-apiserver-ha-220492" [a5d2882e-9fb6-4c38-b232-5dc8cb7a009e] Running
	I0603 12:42:53.723341 1096371 system_pods.go:61] "kube-apiserver-ha-220492-m02" [8ef1ba46-3175-4524-8979-fbb5f4d0608a] Running
	I0603 12:42:53.723344 1096371 system_pods.go:61] "kube-controller-manager-ha-220492" [38d8a477-8b59-43d0-9004-a70023c07b14] Running
	I0603 12:42:53.723347 1096371 system_pods.go:61] "kube-controller-manager-ha-220492-m02" [9cde04ca-9c61-4015-9f2f-08c9db8439cc] Running
	I0603 12:42:53.723350 1096371 system_pods.go:61] "kube-proxy-dkzgt" [e1536cb0-2da1-4d9a-a6f7-50adfb8f7c9a] Running
	I0603 12:42:53.723353 1096371 system_pods.go:61] "kube-proxy-w2hpg" [51a52e47-6a1e-4f9c-ba1b-feb3e362531a] Running
	I0603 12:42:53.723356 1096371 system_pods.go:61] "kube-scheduler-ha-220492" [40a56d71-3787-44fa-a3b7-9d2dc2bcf5ac] Running
	I0603 12:42:53.723359 1096371 system_pods.go:61] "kube-scheduler-ha-220492-m02" [6dede50f-8a71-4a7a-97fa-8cc4d2a6ef8c] Running
	I0603 12:42:53.723362 1096371 system_pods.go:61] "kube-vip-ha-220492" [577ecb1f-e5df-4494-b898-7d2d8e79151d] Running
	I0603 12:42:53.723365 1096371 system_pods.go:61] "kube-vip-ha-220492-m02" [a53477a8-aa28-443e-bf5d-1abb3b66ce57] Running
	I0603 12:42:53.723371 1096371 system_pods.go:61] "storage-provisioner" [f85b2808-26fa-4608-a208-2c11eaddc293] Running
	I0603 12:42:53.723378 1096371 system_pods.go:74] duration metric: took 175.75879ms to wait for pod list to return data ...
	I0603 12:42:53.723389 1096371 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:42:53.911860 1096371 request.go:629] Waited for 188.361515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0603 12:42:53.911932 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0603 12:42:53.911937 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.911944 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.911950 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.919345 1096371 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 12:42:53.919698 1096371 default_sa.go:45] found service account: "default"
	I0603 12:42:53.919721 1096371 default_sa.go:55] duration metric: took 196.321286ms for default service account to be created ...
	I0603 12:42:53.919733 1096371 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:42:54.112230 1096371 request.go:629] Waited for 192.396547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:54.112307 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:54.112314 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:54.112325 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:54.112333 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:54.118409 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:42:54.122780 1096371 system_pods.go:86] 17 kube-system pods found
	I0603 12:42:54.122810 1096371 system_pods.go:89] "coredns-7db6d8ff4d-d2tgp" [534e15ed-2e68-4275-8725-099d7240c25d] Running
	I0603 12:42:54.122818 1096371 system_pods.go:89] "coredns-7db6d8ff4d-q7687" [4e78d9e6-8feb-44ef-b44d-ed6039ab00ee] Running
	I0603 12:42:54.122825 1096371 system_pods.go:89] "etcd-ha-220492" [f2bcc3d2-bb06-4775-9080-72f7652a48b1] Running
	I0603 12:42:54.122831 1096371 system_pods.go:89] "etcd-ha-220492-m02" [035f5269-9ad9-4be8-a582-59bf15e1f6f1] Running
	I0603 12:42:54.122837 1096371 system_pods.go:89] "kindnet-5p8f7" [12b97c9f-e363-42c3-9ac9-d808c47de63a] Running
	I0603 12:42:54.122842 1096371 system_pods.go:89] "kindnet-hbl6v" [9f697f13-4a60-4247-bb5e-a8bcdd3336cd] Running
	I0603 12:42:54.122848 1096371 system_pods.go:89] "kube-apiserver-ha-220492" [a5d2882e-9fb6-4c38-b232-5dc8cb7a009e] Running
	I0603 12:42:54.122855 1096371 system_pods.go:89] "kube-apiserver-ha-220492-m02" [8ef1ba46-3175-4524-8979-fbb5f4d0608a] Running
	I0603 12:42:54.122865 1096371 system_pods.go:89] "kube-controller-manager-ha-220492" [38d8a477-8b59-43d0-9004-a70023c07b14] Running
	I0603 12:42:54.122874 1096371 system_pods.go:89] "kube-controller-manager-ha-220492-m02" [9cde04ca-9c61-4015-9f2f-08c9db8439cc] Running
	I0603 12:42:54.122884 1096371 system_pods.go:89] "kube-proxy-dkzgt" [e1536cb0-2da1-4d9a-a6f7-50adfb8f7c9a] Running
	I0603 12:42:54.122895 1096371 system_pods.go:89] "kube-proxy-w2hpg" [51a52e47-6a1e-4f9c-ba1b-feb3e362531a] Running
	I0603 12:42:54.122903 1096371 system_pods.go:89] "kube-scheduler-ha-220492" [40a56d71-3787-44fa-a3b7-9d2dc2bcf5ac] Running
	I0603 12:42:54.122910 1096371 system_pods.go:89] "kube-scheduler-ha-220492-m02" [6dede50f-8a71-4a7a-97fa-8cc4d2a6ef8c] Running
	I0603 12:42:54.122918 1096371 system_pods.go:89] "kube-vip-ha-220492" [577ecb1f-e5df-4494-b898-7d2d8e79151d] Running
	I0603 12:42:54.122924 1096371 system_pods.go:89] "kube-vip-ha-220492-m02" [a53477a8-aa28-443e-bf5d-1abb3b66ce57] Running
	I0603 12:42:54.122930 1096371 system_pods.go:89] "storage-provisioner" [f85b2808-26fa-4608-a208-2c11eaddc293] Running
	I0603 12:42:54.122944 1096371 system_pods.go:126] duration metric: took 203.201242ms to wait for k8s-apps to be running ...
	I0603 12:42:54.122956 1096371 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:42:54.123014 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:42:54.138088 1096371 system_svc.go:56] duration metric: took 15.123781ms WaitForService to wait for kubelet
	I0603 12:42:54.138113 1096371 kubeadm.go:576] duration metric: took 16.540607996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
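The kubelet check a few lines above runs `sudo systemctl is-active --quiet service kubelet` on the node through ssh_runner. A small sketch of the same check executed locally with os/exec (running it over SSH instead is left out for brevity); a zero exit status means the unit is active:

```go
// Sketch of the kubelet liveness check above. minikube executes the command
// on the node over SSH; here it is run locally with os/exec just to show the
// check itself: `systemctl is-active --quiet` exits 0 when the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```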
	I0603 12:42:54.138133 1096371 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:42:54.311522 1096371 request.go:629] Waited for 173.274208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0603 12:42:54.311597 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0603 12:42:54.311604 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:54.311623 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:54.311633 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:54.315374 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:54.316091 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:42:54.316116 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:42:54.316128 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:42:54.316131 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:42:54.316135 1096371 node_conditions.go:105] duration metric: took 177.997261ms to run NodePressure ...
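The NodePressure step reads each node's reported capacity (the 17734596Ki of ephemeral storage and 2 CPUs printed above) along with its pressure conditions. A client-go sketch of that verification, again assuming a kubeconfig for the cluster:

```go
// Sketch of the NodePressure-style verification above: list the nodes and
// print their cpu / ephemeral-storage capacity plus any pressure conditions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", cond.Type, cond.Status)
			}
		}
	}
}
```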
	I0603 12:42:54.316149 1096371 start.go:240] waiting for startup goroutines ...
	I0603 12:42:54.316186 1096371 start.go:254] writing updated cluster config ...
	I0603 12:42:54.318264 1096371 out.go:177] 
	I0603 12:42:54.319855 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:42:54.319961 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:42:54.321799 1096371 out.go:177] * Starting "ha-220492-m03" control-plane node in "ha-220492" cluster
	I0603 12:42:54.322998 1096371 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:42:54.323025 1096371 cache.go:56] Caching tarball of preloaded images
	I0603 12:42:54.323127 1096371 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:42:54.323138 1096371 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:42:54.323276 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:42:54.323466 1096371 start.go:360] acquireMachinesLock for ha-220492-m03: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:42:54.323549 1096371 start.go:364] duration metric: took 54.152µs to acquireMachinesLock for "ha-220492-m03"
	I0603 12:42:54.323576 1096371 start.go:93] Provisioning new machine with config: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:42:54.323692 1096371 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0603 12:42:54.325236 1096371 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 12:42:54.325321 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:42:54.325356 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:42:54.341059 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0603 12:42:54.341606 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:42:54.342210 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:42:54.342234 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:42:54.342575 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:42:54.342767 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetMachineName
	I0603 12:42:54.342959 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:42:54.343122 1096371 start.go:159] libmachine.API.Create for "ha-220492" (driver="kvm2")
	I0603 12:42:54.343148 1096371 client.go:168] LocalClient.Create starting
	I0603 12:42:54.343186 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 12:42:54.343227 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:42:54.343244 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:42:54.343316 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 12:42:54.343341 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:42:54.343359 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:42:54.343387 1096371 main.go:141] libmachine: Running pre-create checks...
	I0603 12:42:54.343399 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .PreCreateCheck
	I0603 12:42:54.343564 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetConfigRaw
	I0603 12:42:54.343938 1096371 main.go:141] libmachine: Creating machine...
	I0603 12:42:54.343952 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .Create
	I0603 12:42:54.344066 1096371 main.go:141] libmachine: (ha-220492-m03) Creating KVM machine...
	I0603 12:42:54.345206 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found existing default KVM network
	I0603 12:42:54.345353 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found existing private KVM network mk-ha-220492
	I0603 12:42:54.345545 1096371 main.go:141] libmachine: (ha-220492-m03) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03 ...
	I0603 12:42:54.345569 1096371 main.go:141] libmachine: (ha-220492-m03) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:42:54.345625 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:54.345525 1097168 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:42:54.345723 1096371 main.go:141] libmachine: (ha-220492-m03) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:42:54.620863 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:54.620701 1097168 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa...
	I0603 12:42:55.088497 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:55.088351 1097168 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/ha-220492-m03.rawdisk...
	I0603 12:42:55.088533 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Writing magic tar header
	I0603 12:42:55.088547 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Writing SSH key tar header
	I0603 12:42:55.088559 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:55.088471 1097168 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03 ...
	I0603 12:42:55.088574 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03
	I0603 12:42:55.088660 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 12:42:55.088686 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03 (perms=drwx------)
	I0603 12:42:55.088693 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:42:55.088709 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 12:42:55.088726 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 12:42:55.088739 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 12:42:55.088756 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins
	I0603 12:42:55.088764 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home
	I0603 12:42:55.088772 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Skipping /home - not owner
	I0603 12:42:55.088782 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 12:42:55.088793 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 12:42:55.088807 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 12:42:55.088819 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 12:42:55.088830 1096371 main.go:141] libmachine: (ha-220492-m03) Creating domain...
	I0603 12:42:55.089747 1096371 main.go:141] libmachine: (ha-220492-m03) define libvirt domain using xml: 
	I0603 12:42:55.089775 1096371 main.go:141] libmachine: (ha-220492-m03) <domain type='kvm'>
	I0603 12:42:55.089787 1096371 main.go:141] libmachine: (ha-220492-m03)   <name>ha-220492-m03</name>
	I0603 12:42:55.089817 1096371 main.go:141] libmachine: (ha-220492-m03)   <memory unit='MiB'>2200</memory>
	I0603 12:42:55.089835 1096371 main.go:141] libmachine: (ha-220492-m03)   <vcpu>2</vcpu>
	I0603 12:42:55.089839 1096371 main.go:141] libmachine: (ha-220492-m03)   <features>
	I0603 12:42:55.089844 1096371 main.go:141] libmachine: (ha-220492-m03)     <acpi/>
	I0603 12:42:55.089849 1096371 main.go:141] libmachine: (ha-220492-m03)     <apic/>
	I0603 12:42:55.089854 1096371 main.go:141] libmachine: (ha-220492-m03)     <pae/>
	I0603 12:42:55.089857 1096371 main.go:141] libmachine: (ha-220492-m03)     
	I0603 12:42:55.089865 1096371 main.go:141] libmachine: (ha-220492-m03)   </features>
	I0603 12:42:55.089870 1096371 main.go:141] libmachine: (ha-220492-m03)   <cpu mode='host-passthrough'>
	I0603 12:42:55.089875 1096371 main.go:141] libmachine: (ha-220492-m03)   
	I0603 12:42:55.089879 1096371 main.go:141] libmachine: (ha-220492-m03)   </cpu>
	I0603 12:42:55.089890 1096371 main.go:141] libmachine: (ha-220492-m03)   <os>
	I0603 12:42:55.089900 1096371 main.go:141] libmachine: (ha-220492-m03)     <type>hvm</type>
	I0603 12:42:55.089911 1096371 main.go:141] libmachine: (ha-220492-m03)     <boot dev='cdrom'/>
	I0603 12:42:55.089944 1096371 main.go:141] libmachine: (ha-220492-m03)     <boot dev='hd'/>
	I0603 12:42:55.089966 1096371 main.go:141] libmachine: (ha-220492-m03)     <bootmenu enable='no'/>
	I0603 12:42:55.089973 1096371 main.go:141] libmachine: (ha-220492-m03)   </os>
	I0603 12:42:55.089979 1096371 main.go:141] libmachine: (ha-220492-m03)   <devices>
	I0603 12:42:55.089986 1096371 main.go:141] libmachine: (ha-220492-m03)     <disk type='file' device='cdrom'>
	I0603 12:42:55.090000 1096371 main.go:141] libmachine: (ha-220492-m03)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/boot2docker.iso'/>
	I0603 12:42:55.090032 1096371 main.go:141] libmachine: (ha-220492-m03)       <target dev='hdc' bus='scsi'/>
	I0603 12:42:55.090057 1096371 main.go:141] libmachine: (ha-220492-m03)       <readonly/>
	I0603 12:42:55.090065 1096371 main.go:141] libmachine: (ha-220492-m03)     </disk>
	I0603 12:42:55.090082 1096371 main.go:141] libmachine: (ha-220492-m03)     <disk type='file' device='disk'>
	I0603 12:42:55.090096 1096371 main.go:141] libmachine: (ha-220492-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 12:42:55.090111 1096371 main.go:141] libmachine: (ha-220492-m03)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/ha-220492-m03.rawdisk'/>
	I0603 12:42:55.090124 1096371 main.go:141] libmachine: (ha-220492-m03)       <target dev='hda' bus='virtio'/>
	I0603 12:42:55.090137 1096371 main.go:141] libmachine: (ha-220492-m03)     </disk>
	I0603 12:42:55.090177 1096371 main.go:141] libmachine: (ha-220492-m03)     <interface type='network'>
	I0603 12:42:55.090198 1096371 main.go:141] libmachine: (ha-220492-m03)       <source network='mk-ha-220492'/>
	I0603 12:42:55.090208 1096371 main.go:141] libmachine: (ha-220492-m03)       <model type='virtio'/>
	I0603 12:42:55.090215 1096371 main.go:141] libmachine: (ha-220492-m03)     </interface>
	I0603 12:42:55.090225 1096371 main.go:141] libmachine: (ha-220492-m03)     <interface type='network'>
	I0603 12:42:55.090235 1096371 main.go:141] libmachine: (ha-220492-m03)       <source network='default'/>
	I0603 12:42:55.090244 1096371 main.go:141] libmachine: (ha-220492-m03)       <model type='virtio'/>
	I0603 12:42:55.090255 1096371 main.go:141] libmachine: (ha-220492-m03)     </interface>
	I0603 12:42:55.090281 1096371 main.go:141] libmachine: (ha-220492-m03)     <serial type='pty'>
	I0603 12:42:55.090302 1096371 main.go:141] libmachine: (ha-220492-m03)       <target port='0'/>
	I0603 12:42:55.090315 1096371 main.go:141] libmachine: (ha-220492-m03)     </serial>
	I0603 12:42:55.090325 1096371 main.go:141] libmachine: (ha-220492-m03)     <console type='pty'>
	I0603 12:42:55.090342 1096371 main.go:141] libmachine: (ha-220492-m03)       <target type='serial' port='0'/>
	I0603 12:42:55.090351 1096371 main.go:141] libmachine: (ha-220492-m03)     </console>
	I0603 12:42:55.090362 1096371 main.go:141] libmachine: (ha-220492-m03)     <rng model='virtio'>
	I0603 12:42:55.090372 1096371 main.go:141] libmachine: (ha-220492-m03)       <backend model='random'>/dev/random</backend>
	I0603 12:42:55.090384 1096371 main.go:141] libmachine: (ha-220492-m03)     </rng>
	I0603 12:42:55.090398 1096371 main.go:141] libmachine: (ha-220492-m03)     
	I0603 12:42:55.090409 1096371 main.go:141] libmachine: (ha-220492-m03)     
	I0603 12:42:55.090416 1096371 main.go:141] libmachine: (ha-220492-m03)   </devices>
	I0603 12:42:55.090427 1096371 main.go:141] libmachine: (ha-220492-m03) </domain>
	I0603 12:42:55.090436 1096371 main.go:141] libmachine: (ha-220492-m03) 
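The block above logs the libvirt domain XML the kvm2 driver generates for ha-220492-m03. A hedged sketch using the libvirt Go bindings (libvirt.org/go/libvirt is an assumed import path, and the XML is a trimmed-down stand-in rather than the exact document minikube produces) to define and start a domain from such a definition:

```go
// Sketch, assuming the libvirt.org/go/libvirt bindings: define a domain from
// an XML document like the one logged above, then start it. The XML is a
// reduced illustration of the logged definition, not minikube's exact output.
package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

const domainXML = `<domain type='kvm'>
  <name>ha-220492-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <interface type='network'><source network='mk-ha-220492'/><model type='virtio'/></interface>
  </devices>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persists the definition
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the defined domain
		panic(err)
	}
	fmt.Println("domain defined and started")
}
```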
	I0603 12:42:55.096811 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:2f:3f:07 in network default
	I0603 12:42:55.097338 1096371 main.go:141] libmachine: (ha-220492-m03) Ensuring networks are active...
	I0603 12:42:55.097360 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:55.098017 1096371 main.go:141] libmachine: (ha-220492-m03) Ensuring network default is active
	I0603 12:42:55.098256 1096371 main.go:141] libmachine: (ha-220492-m03) Ensuring network mk-ha-220492 is active
	I0603 12:42:55.098610 1096371 main.go:141] libmachine: (ha-220492-m03) Getting domain xml...
	I0603 12:42:55.099251 1096371 main.go:141] libmachine: (ha-220492-m03) Creating domain...
	I0603 12:42:56.333622 1096371 main.go:141] libmachine: (ha-220492-m03) Waiting to get IP...
	I0603 12:42:56.334452 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:56.334805 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:56.334832 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:56.334779 1097168 retry.go:31] will retry after 270.111796ms: waiting for machine to come up
	I0603 12:42:56.607116 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:56.607501 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:56.607534 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:56.607452 1097168 retry.go:31] will retry after 259.20477ms: waiting for machine to come up
	I0603 12:42:56.867718 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:56.868143 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:56.868171 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:56.868092 1097168 retry.go:31] will retry after 415.070892ms: waiting for machine to come up
	I0603 12:42:57.284930 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:57.285525 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:57.285565 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:57.285456 1097168 retry.go:31] will retry after 400.725155ms: waiting for machine to come up
	I0603 12:42:57.687701 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:57.688129 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:57.688165 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:57.688062 1097168 retry.go:31] will retry after 678.144187ms: waiting for machine to come up
	I0603 12:42:58.367821 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:58.368220 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:58.368294 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:58.368200 1097168 retry.go:31] will retry after 931.821679ms: waiting for machine to come up
	I0603 12:42:59.301346 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:59.301831 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:59.301865 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:59.301781 1097168 retry.go:31] will retry after 755.612995ms: waiting for machine to come up
	I0603 12:43:00.058476 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:00.058926 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:00.058959 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:00.058869 1097168 retry.go:31] will retry after 1.26953951s: waiting for machine to come up
	I0603 12:43:01.330176 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:01.330783 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:01.330816 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:01.330729 1097168 retry.go:31] will retry after 1.366168747s: waiting for machine to come up
	I0603 12:43:02.698825 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:02.699340 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:02.699368 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:02.699306 1097168 retry.go:31] will retry after 1.428113816s: waiting for machine to come up
	I0603 12:43:04.128962 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:04.129604 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:04.129639 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:04.129545 1097168 retry.go:31] will retry after 2.201677486s: waiting for machine to come up
	I0603 12:43:06.332618 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:06.333109 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:06.333168 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:06.333082 1097168 retry.go:31] will retry after 3.368027556s: waiting for machine to come up
	I0603 12:43:09.702818 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:09.703237 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:09.703261 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:09.703190 1097168 retry.go:31] will retry after 4.345500761s: waiting for machine to come up
	I0603 12:43:14.050558 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:14.051004 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:14.051035 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:14.050932 1097168 retry.go:31] will retry after 4.935094667s: waiting for machine to come up
	I0603 12:43:18.990788 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:18.991372 1096371 main.go:141] libmachine: (ha-220492-m03) Found IP for machine: 192.168.39.169
	I0603 12:43:18.991419 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has current primary IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:18.991430 1096371 main.go:141] libmachine: (ha-220492-m03) Reserving static IP address...
	I0603 12:43:18.991807 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find host DHCP lease matching {name: "ha-220492-m03", mac: "52:54:00:ae:60:87", ip: "192.168.39.169"} in network mk-ha-220492
	I0603 12:43:19.065731 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Getting to WaitForSSH function...
	I0603 12:43:19.065789 1096371 main.go:141] libmachine: (ha-220492-m03) Reserved static IP address: 192.168.39.169
	I0603 12:43:19.065805 1096371 main.go:141] libmachine: (ha-220492-m03) Waiting for SSH to be available...
	I0603 12:43:19.068473 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.069095 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.069253 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.069613 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Using SSH client type: external
	I0603 12:43:19.069665 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa (-rw-------)
	I0603 12:43:19.069734 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:43:19.069762 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | About to run SSH command:
	I0603 12:43:19.069778 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | exit 0
	I0603 12:43:19.201678 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | SSH cmd err, output: <nil>: 
	I0603 12:43:19.202022 1096371 main.go:141] libmachine: (ha-220492-m03) KVM machine creation complete!
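The retry.go lines above show the kvm2 driver polling libvirt for the guest's DHCP lease, backing off with growing, jittered delays until the domain reports an address, and only then reserving the static lease and probing SSH. A minimal Go sketch of that wait pattern, assuming a hypothetical lookupIP helper in place of the driver's lease lookup by MAC address:

package machine

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with growing, jittered delays until the guest
// reports an address, mirroring the "will retry after ..." lines above.
// lookupIP stands in for the driver's DHCP-lease lookup by MAC address.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter keeps parallel node creations from polling libvirt in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		log.Printf("will retry after %s: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay += delay / 2
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

The jitter and growth factor are assumptions; the logged delays (755ms, 1.27s, 1.37s, 1.43s, 2.2s, 3.37s, 4.35s, 4.94s) are consistent with some randomization on top of a roughly multiplicative backoff.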
	I0603 12:43:19.202377 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetConfigRaw
	I0603 12:43:19.202958 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:19.203165 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:19.203354 1096371 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 12:43:19.203373 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:43:19.204613 1096371 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 12:43:19.204626 1096371 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 12:43:19.204632 1096371 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 12:43:19.204638 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.207109 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.207510 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.207536 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.207716 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:19.207897 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.208077 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.208274 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:19.208431 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:19.208686 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:19.208699 1096371 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 12:43:19.316587 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
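The "About to run SSH command: exit 0" exchanges above probe readiness by running a no-op over SSH and checking only the exit status. A rough equivalent with golang.org/x/crypto/ssh; the function name and timeout are assumptions, while the key path and the InsecureIgnoreHostKey behaviour correspond to the -o StrictHostKeyChecking=no options shown in the external SSH command earlier:

package provision

import (
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady returns nil once "exit 0" can be run over SSH, which is all the
// provisioner needs before it starts configuring the guest.
func sshReady(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,            // assumed
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

A call in the spirit of the parameters logged above would look like sshReady("192.168.39.169:22", "docker", ".../machines/ha-220492-m03/id_rsa").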
	I0603 12:43:19.316614 1096371 main.go:141] libmachine: Detecting the provisioner...
	I0603 12:43:19.316623 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.319634 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.320042 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.320077 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.320227 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:19.320475 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.320672 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.320884 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:19.321063 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:19.321232 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:19.321243 1096371 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 12:43:19.434387 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 12:43:19.434463 1096371 main.go:141] libmachine: found compatible host: buildroot
	I0603 12:43:19.434472 1096371 main.go:141] libmachine: Provisioning with buildroot...
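Provisioner detection above amounts to running cat /etc/os-release and matching the ID field, buildroot in this case. A small sketch of that match; the function name is an assumption:

package provision

import (
	"bufio"
	"strings"
)

// detectProvisioner picks a provisioner from /etc/os-release content,
// mirroring the "found compatible host: buildroot" decision above.
func detectProvisioner(osRelease string) string {
	scanner := bufio.NewScanner(strings.NewReader(osRelease))
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "ID=") {
			switch strings.Trim(strings.TrimPrefix(line, "ID="), `"`) {
			case "buildroot":
				return "buildroot"
			}
			break
		}
	}
	return "unknown"
}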
	I0603 12:43:19.434482 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetMachineName
	I0603 12:43:19.434811 1096371 buildroot.go:166] provisioning hostname "ha-220492-m03"
	I0603 12:43:19.434843 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetMachineName
	I0603 12:43:19.435078 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.438030 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.438406 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.438438 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.438578 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:19.438798 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.439004 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.439273 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:19.439511 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:19.439748 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:19.439766 1096371 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220492-m03 && echo "ha-220492-m03" | sudo tee /etc/hostname
	I0603 12:43:19.570221 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492-m03
	
	I0603 12:43:19.570258 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.573051 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.573536 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.573572 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.573735 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:19.573941 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.574113 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.574228 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:19.574430 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:19.574652 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:19.574677 1096371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220492-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220492-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220492-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:43:19.695101 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:43:19.695145 1096371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:43:19.695176 1096371 buildroot.go:174] setting up certificates
	I0603 12:43:19.695188 1096371 provision.go:84] configureAuth start
	I0603 12:43:19.695203 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetMachineName
	I0603 12:43:19.695535 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:43:19.698321 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.698660 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.698692 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.698820 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.700861 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.701183 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.701215 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.701337 1096371 provision.go:143] copyHostCerts
	I0603 12:43:19.701373 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:43:19.701426 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 12:43:19.701439 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:43:19.701511 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:43:19.701586 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:43:19.701606 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 12:43:19.701613 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:43:19.701636 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:43:19.701683 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:43:19.701699 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 12:43:19.701706 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:43:19.701726 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:43:19.701776 1096371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.ha-220492-m03 san=[127.0.0.1 192.168.39.169 ha-220492-m03 localhost minikube]
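provision.go:117 above generates a server certificate signed by the minikube CA with the node's names and addresses as SANs (127.0.0.1, 192.168.39.169, ha-220492-m03, localhost, minikube). A compressed crypto/x509 sketch of that step; loading of the CA pair, the key size, and the validity/usage fields are assumptions:

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate for the SANs listed in the log
// line above, signed by an already-loaded CA certificate and key.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-220492-m03"}},
		DNSNames:     []string{"ha-220492-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.169")},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}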
	I0603 12:43:20.001276 1096371 provision.go:177] copyRemoteCerts
	I0603 12:43:20.001334 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:43:20.001359 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.003939 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.004239 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.004268 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.004520 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.004727 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.004875 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.005010 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:43:20.092470 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 12:43:20.092538 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 12:43:20.118037 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 12:43:20.118115 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:43:20.143685 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 12:43:20.143758 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:43:20.168381 1096371 provision.go:87] duration metric: took 473.178136ms to configureAuth
	I0603 12:43:20.168414 1096371 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:43:20.168639 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:43:20.168724 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.171425 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.171794 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.171822 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.171970 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.172177 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.172336 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.172484 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.172643 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:20.172821 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:20.172839 1096371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:43:20.453638 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:43:20.453674 1096371 main.go:141] libmachine: Checking connection to Docker...
	I0603 12:43:20.453684 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetURL
	I0603 12:43:20.455245 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Using libvirt version 6000000
	I0603 12:43:20.457867 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.458347 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.458396 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.458483 1096371 main.go:141] libmachine: Docker is up and running!
	I0603 12:43:20.458495 1096371 main.go:141] libmachine: Reticulating splines...
	I0603 12:43:20.458502 1096371 client.go:171] duration metric: took 26.115344616s to LocalClient.Create
	I0603 12:43:20.458526 1096371 start.go:167] duration metric: took 26.11540413s to libmachine.API.Create "ha-220492"
	I0603 12:43:20.458538 1096371 start.go:293] postStartSetup for "ha-220492-m03" (driver="kvm2")
	I0603 12:43:20.458553 1096371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:43:20.458571 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.458829 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:43:20.458854 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.461283 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.461622 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.461649 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.461855 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.462088 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.462274 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.462460 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:43:20.553625 1096371 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:43:20.558471 1096371 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:43:20.558535 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:43:20.558610 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:43:20.558691 1096371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 12:43:20.558703 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 12:43:20.558783 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:43:20.569972 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:43:20.595859 1096371 start.go:296] duration metric: took 137.299966ms for postStartSetup
	I0603 12:43:20.595921 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetConfigRaw
	I0603 12:43:20.596583 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:43:20.599761 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.600203 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.600223 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.600606 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:43:20.600826 1096371 start.go:128] duration metric: took 26.277116804s to createHost
	I0603 12:43:20.600858 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.603058 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.603486 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.603509 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.603699 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.603957 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.604121 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.604249 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.604410 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:20.604590 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:20.604600 1096371 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 12:43:20.720338 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418600.696165887
	
	I0603 12:43:20.720374 1096371 fix.go:216] guest clock: 1717418600.696165887
	I0603 12:43:20.720386 1096371 fix.go:229] Guest: 2024-06-03 12:43:20.696165887 +0000 UTC Remote: 2024-06-03 12:43:20.600841955 +0000 UTC m=+155.483411943 (delta=95.323932ms)
	I0603 12:43:20.720414 1096371 fix.go:200] guest clock delta is within tolerance: 95.323932ms
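fix.go above reads the guest clock over SSH (the date command), compares it with the host clock, and accepts the machine when the skew is small (delta=95.323932ms here). A minimal sketch of that comparison; the 2s tolerance used in the example is an assumption:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest/host clock skew is acceptable,
// as in the "guest clock delta is within tolerance" line above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1717418600696165887) // value reported by `date` in the log
	host := time.Now()
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Println(delta, ok)
}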
	I0603 12:43:20.720422 1096371 start.go:83] releasing machines lock for "ha-220492-m03", held for 26.396858432s
	I0603 12:43:20.720449 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.720784 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:43:20.723898 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.724327 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.724358 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.726351 1096371 out.go:177] * Found network options:
	I0603 12:43:20.728299 1096371 out.go:177]   - NO_PROXY=192.168.39.6,192.168.39.106
	W0603 12:43:20.729832 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 12:43:20.729861 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 12:43:20.729881 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.730652 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.730895 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.731055 1096371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:43:20.731108 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	W0603 12:43:20.731132 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 12:43:20.731162 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 12:43:20.731283 1096371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:43:20.731300 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.734387 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.734430 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.734770 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.734801 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.734827 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.734841 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.734881 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.735097 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.735110 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.735275 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.735344 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.735427 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.735499 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:43:20.735570 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:43:20.981022 1096371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:43:20.988392 1096371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:43:20.988507 1096371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:43:21.006315 1096371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:43:21.006443 1096371 start.go:494] detecting cgroup driver to use...
	I0603 12:43:21.006547 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:43:21.025644 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:43:21.040335 1096371 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:43:21.040399 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:43:21.056817 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:43:21.072459 1096371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:43:21.207222 1096371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:43:21.352527 1096371 docker.go:233] disabling docker service ...
	I0603 12:43:21.352611 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:43:21.369215 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:43:21.383793 1096371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:43:21.530137 1096371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:43:21.643860 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:43:21.658602 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:43:21.678687 1096371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:43:21.678761 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.689934 1096371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:43:21.690016 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.701661 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.714292 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.726116 1096371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:43:21.738093 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.750313 1096371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.770297 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.783297 1096371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:43:21.794238 1096371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:43:21.794304 1096371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:43:21.807985 1096371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
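The netfilter prerequisite above is handled defensively: the sysctl probe for net.bridge.bridge-nf-call-iptables can fail on a fresh guest (status 255 here, "which might be okay"), in which case br_netfilter is loaded and IPv4 forwarding is switched on. A short os/exec sketch of that fallback; the function name is assumed and error handling is trimmed:

package cruntime

import (
	"log"
	"os/exec"
)

// ensureNetfilter mirrors the probe/fallback above: try the bridge netfilter
// sysctl, load br_netfilter if the probe fails, then enable IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Expected on a fresh guest where the module is not loaded yet.
		log.Printf("sysctl probe failed, loading br_netfilter: %v", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}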
	I0603 12:43:21.818315 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:43:21.964194 1096371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:43:22.115343 1096371 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:43:22.115462 1096371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:43:22.120272 1096371 start.go:562] Will wait 60s for crictl version
	I0603 12:43:22.120327 1096371 ssh_runner.go:195] Run: which crictl
	I0603 12:43:22.124229 1096371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:43:22.172026 1096371 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:43:22.172099 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:43:22.202369 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:43:22.233707 1096371 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:43:22.235401 1096371 out.go:177]   - env NO_PROXY=192.168.39.6
	I0603 12:43:22.236873 1096371 out.go:177]   - env NO_PROXY=192.168.39.6,192.168.39.106
	I0603 12:43:22.238349 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:43:22.241427 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:22.241843 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:22.241868 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:22.242108 1096371 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:43:22.246783 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
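The /bin/bash one-liner above refreshes the host.minikube.internal entry by filtering the old line out of /etc/hosts, appending the new mapping, writing the result to a temp file, and copying it back with sudo, so the live file is never left half-written. A rough Go equivalent of the rewrite itself; the function name is assumed and privilege handling is left out:

package node

import "strings"

// setHostsEntry rewrites /etc/hosts content so exactly one line maps the
// given name, mirroring the grep -v / echo / tmp-file / cp sequence above.
func setHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) || strings.HasSuffix(line, " "+name) {
			continue // drop any stale mapping for this name
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}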
	I0603 12:43:22.259935 1096371 mustload.go:65] Loading cluster: ha-220492
	I0603 12:43:22.260185 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:43:22.260462 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:43:22.260520 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:43:22.277384 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0603 12:43:22.277925 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:43:22.278444 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:43:22.278469 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:43:22.278957 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:43:22.279163 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:43:22.280930 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:43:22.281238 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:43:22.281286 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:43:22.296092 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0603 12:43:22.296571 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:43:22.297036 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:43:22.297054 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:43:22.297385 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:43:22.297640 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:43:22.297834 1096371 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492 for IP: 192.168.39.169
	I0603 12:43:22.297849 1096371 certs.go:194] generating shared ca certs ...
	I0603 12:43:22.297870 1096371 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:43:22.298030 1096371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:43:22.298082 1096371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:43:22.298097 1096371 certs.go:256] generating profile certs ...
	I0603 12:43:22.298197 1096371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key
	I0603 12:43:22.298231 1096371 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.4fd59e07
	I0603 12:43:22.298272 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.4fd59e07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.106 192.168.39.169 192.168.39.254]
	I0603 12:43:22.384345 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.4fd59e07 ...
	I0603 12:43:22.384396 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.4fd59e07: {Name:mk9434cb6dd09b3cdb5570cdf26f69733c2691cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:43:22.384595 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.4fd59e07 ...
	I0603 12:43:22.384608 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.4fd59e07: {Name:mk5ce8cb87692994d1dd4d129a27c585f4731b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:43:22.384681 1096371 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.4fd59e07 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt
	I0603 12:43:22.384833 1096371 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.4fd59e07 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key
	I0603 12:43:22.384963 1096371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key
	I0603 12:43:22.384980 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 12:43:22.384993 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 12:43:22.385007 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 12:43:22.385020 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 12:43:22.385033 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 12:43:22.385045 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 12:43:22.385057 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 12:43:22.385072 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 12:43:22.385118 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 12:43:22.385150 1096371 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 12:43:22.385166 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:43:22.385189 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:43:22.385211 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:43:22.385232 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 12:43:22.385272 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:43:22.385299 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 12:43:22.385316 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 12:43:22.385328 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:43:22.385362 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:43:22.388803 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:43:22.389462 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:43:22.389491 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:43:22.389727 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:43:22.389957 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:43:22.390116 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:43:22.390327 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:43:22.465777 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0603 12:43:22.471023 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 12:43:22.485916 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0603 12:43:22.491124 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0603 12:43:22.504898 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 12:43:22.509773 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 12:43:22.523642 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0603 12:43:22.527983 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0603 12:43:22.539008 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0603 12:43:22.543891 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 12:43:22.557986 1096371 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0603 12:43:22.565354 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0603 12:43:22.577915 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:43:22.604742 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:43:22.629391 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:43:22.659078 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:43:22.686576 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0603 12:43:22.712999 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 12:43:22.738945 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:43:22.765149 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:43:22.793642 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 12:43:22.819440 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 12:43:22.845526 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:43:22.869544 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 12:43:22.886657 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0603 12:43:22.904446 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 12:43:22.921153 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0603 12:43:22.938320 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 12:43:22.957084 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0603 12:43:22.974794 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 12:43:22.992634 1096371 ssh_runner.go:195] Run: openssl version
	I0603 12:43:22.999057 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:43:23.011182 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:43:23.016117 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:43:23.016177 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:43:23.022552 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:43:23.034290 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 12:43:23.046115 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 12:43:23.050785 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 12:43:23.050845 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 12:43:23.056647 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 12:43:23.069008 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 12:43:23.080741 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 12:43:23.085671 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 12:43:23.085736 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 12:43:23.092133 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:43:23.105182 1096371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:43:23.109925 1096371 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 12:43:23.109994 1096371 kubeadm.go:928] updating node {m03 192.168.39.169 8443 v1.30.1 crio true true} ...
	I0603 12:43:23.110105 1096371 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220492-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:43:23.110148 1096371 kube-vip.go:115] generating kube-vip config ...
	I0603 12:43:23.110204 1096371 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 12:43:23.131514 1096371 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 12:43:23.131635 1096371 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
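
The YAML above is a complete kube-vip static-pod manifest with the HA VIP (192.168.39.254) and API port (8443) filled in. A minimal sketch of how such a manifest can be rendered with Go's text/template, assuming a heavily reduced template; it is illustrative only and omits most of the fields shown above:

// Render a cut-down kube-vip static-pod manifest from a template.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: "{{ .VIP }}"
    - name: cp_enable
      value: "true"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the log: VIP 192.168.39.254 on port 8443.
	if err := t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{"192.168.39.254", 8443}); err != nil {
		panic(err)
	}
}
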
	I0603 12:43:23.131705 1096371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:43:23.142473 1096371 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 12:43:23.142542 1096371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 12:43:23.153137 1096371 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 12:43:23.153138 1096371 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0603 12:43:23.153143 1096371 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 12:43:23.153183 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 12:43:23.153166 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 12:43:23.153215 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:43:23.153253 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 12:43:23.153285 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 12:43:23.172473 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 12:43:23.172525 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 12:43:23.172581 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 12:43:23.172604 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 12:43:23.172621 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 12:43:23.172526 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 12:43:23.185256 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 12:43:23.185296 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
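
The checksum=file:...sha256 query strings above indicate the binaries are verified against the published SHA-256 files on dl.k8s.io before being copied into /var/lib/minikube/binaries. A minimal, hypothetical Go sketch of such a checksum-verified download (not minikube's own downloader; the URL is the kubelet URL from the log):

// Fetch a binary and its published SHA-256, compare, then install it executable.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the file may contain "<hex>" or "<hex>  kubelet"
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	// Write with the execute bit set, since kubelet/kubeadm/kubectl must be runnable.
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubelet verified and written")
}
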
	I0603 12:43:24.134865 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 12:43:24.145082 1096371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 12:43:24.162562 1096371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:43:24.179437 1096371 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 12:43:24.196174 1096371 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 12:43:24.200209 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
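
The grep/echo one-liner above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: any stale mapping is dropped and the current VIP appended, so the join command can resolve the control-plane endpoint. A rough Go equivalent, writing to a hypothetical <path>.new file instead of using sudo cp:

// Ensure exactly one "<ip>\t<host>" mapping exists for the given hostname.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this hostname; keep everything else.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
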
	I0603 12:43:24.212797 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:43:24.344234 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:43:24.362738 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:43:24.363407 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:43:24.363478 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:43:24.382750 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I0603 12:43:24.383293 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:43:24.383844 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:43:24.383869 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:43:24.384258 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:43:24.384452 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:43:24.384614 1096371 start.go:316] joinCluster: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:43:24.384749 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 12:43:24.384776 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:43:24.388207 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:43:24.388797 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:43:24.388830 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:43:24.389000 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:43:24.389214 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:43:24.389426 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:43:24.389609 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:43:24.551779 1096371 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:43:24.551825 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vhxak4.tfp86wxpifu70ily --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220492-m03 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443"
	I0603 12:43:48.466021 1096371 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vhxak4.tfp86wxpifu70ily --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220492-m03 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443": (23.914162266s)
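
The join above is driven entirely by kubeadm: a fresh join command is printed on the existing control plane and then re-run on m03 with control-plane flags appended. A minimal sketch under the assumption that both commands run locally (minikube executes them over SSH on the respective machines); the flag values are the ones from the log:

// Build the control-plane join command from "kubeadm token create --print-join-command".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out)) +
		" --ignore-preflight-errors=all" +
		" --cri-socket unix:///var/run/crio/crio.sock" +
		" --node-name=ha-220492-m03" +
		" --control-plane" +
		" --apiserver-advertise-address=192.168.39.169" +
		" --apiserver-bind-port=8443"
	fmt.Println("would run on m03:", join)
	// exec.Command("/bin/bash", "-c", join).Run() // executed on the joining node in practice
}
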
	I0603 12:43:48.466076 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 12:43:49.074811 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220492-m03 minikube.k8s.io/updated_at=2024_06_03T12_43_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-220492 minikube.k8s.io/primary=false
	I0603 12:43:49.193601 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-220492-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 12:43:49.309011 1096371 start.go:318] duration metric: took 24.924388938s to joinCluster
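
After the join, the new node is labeled with minikube metadata and its control-plane NoSchedule taint is removed so it can also run workloads. A small illustrative sketch shelling out to kubectl, mirroring the two commands logged above (minikube uses the pinned kubectl with an explicit --kubeconfig):

// Label the joined node and drop its control-plane NoSchedule taint.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v: %s (err=%v)\n", args, out, err)
}

func main() {
	node := "ha-220492-m03"
	run("label", "--overwrite", "nodes", node,
		"minikube.k8s.io/name=ha-220492", "minikube.k8s.io/primary=false")
	// The trailing "-" removes the taint, as in the log line above.
	run("taint", "nodes", node, "node-role.kubernetes.io/control-plane:NoSchedule-")
}
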
	I0603 12:43:49.309110 1096371 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:43:49.310641 1096371 out.go:177] * Verifying Kubernetes components...
	I0603 12:43:49.309516 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:43:49.311715 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:43:49.581989 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:43:49.608808 1096371 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:43:49.609148 1096371 kapi.go:59] client config for ha-220492: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt", KeyFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key", CAFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 12:43:49.609240 1096371 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.6:8443
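
The warning above shows the client config being repointed from the stale VIP (192.168.39.254:8443) to a concrete API server (192.168.39.6:8443). A minimal client-go sketch of that kind of host override; the kubeconfig path is the one from the log, the rest is illustrative:

// Load a kubeconfig, override its Host, and verify connectivity.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19011-1078924/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Replace the stale VIP host with the primary node's API server address.
	cfg.Host = "https://192.168.39.6:8443"
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ver, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("talking to", cfg.Host, "running", ver.GitVersion)
}
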
	I0603 12:43:49.609573 1096371 node_ready.go:35] waiting up to 6m0s for node "ha-220492-m03" to be "Ready" ...
	I0603 12:43:49.609704 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:49.609719 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:49.609729 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:49.609739 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:49.613506 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:50.110524 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:50.110557 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:50.110565 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:50.110574 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:50.113651 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:50.610441 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:50.610464 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:50.610472 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:50.610476 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:50.613944 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:51.109896 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:51.109918 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:51.109927 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:51.109930 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:51.115692 1096371 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:43:51.610724 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:51.610745 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:51.610753 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:51.610757 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:51.614405 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:51.614931 1096371 node_ready.go:53] node "ha-220492-m03" has status "Ready":"False"
	I0603 12:43:52.110574 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:52.110598 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:52.110607 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:52.110610 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:52.115335 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:52.610167 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:52.610191 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:52.610199 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:52.610203 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:52.614880 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:53.110742 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:53.110771 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:53.110783 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:53.110791 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:53.114681 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:53.610742 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:53.610774 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:53.610785 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:53.610789 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:53.615916 1096371 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:43:53.616844 1096371 node_ready.go:53] node "ha-220492-m03" has status "Ready":"False"
	I0603 12:43:54.110157 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:54.110205 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:54.110214 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:54.110218 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:54.114472 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:54.610217 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:54.610240 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:54.610250 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:54.610254 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:54.613866 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.110220 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:55.110245 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.110253 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.110257 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.113487 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.610455 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:55.610479 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.610489 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.610496 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.614341 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.614994 1096371 node_ready.go:49] node "ha-220492-m03" has status "Ready":"True"
	I0603 12:43:55.615014 1096371 node_ready.go:38] duration metric: took 6.005414937s for node "ha-220492-m03" to be "Ready" ...
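
The loop above polls GET /api/v1/nodes/ha-220492-m03 roughly every 500ms until the node reports Ready. A minimal sketch of such a readiness wait with client-go, assuming a default kubeconfig location and using the 6-minute budget from the log:

// Poll a node until its Ready condition is True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *v1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := client.CoreV1().Nodes().Get(ctx, "ha-220492-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to be Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
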
	I0603 12:43:55.615023 1096371 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:43:55.615083 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:43:55.615092 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.615099 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.615102 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.621643 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:43:55.629636 1096371 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.629772 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-d2tgp
	I0603 12:43:55.629785 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.629793 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.629797 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.632667 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.633375 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:55.633393 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.633420 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.633426 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.636047 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.636441 1096371 pod_ready.go:92] pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:55.636458 1096371 pod_ready.go:81] duration metric: took 6.798231ms for pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.636465 1096371 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.636515 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-q7687
	I0603 12:43:55.636523 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.636530 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.636537 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.638896 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.639602 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:55.639617 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.639623 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.639627 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.643342 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.643839 1096371 pod_ready.go:92] pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:55.643860 1096371 pod_ready.go:81] duration metric: took 7.385134ms for pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.643871 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.643931 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492
	I0603 12:43:55.643947 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.643957 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.643966 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.646755 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.647316 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:55.647331 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.647338 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.647343 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.650630 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.651339 1096371 pod_ready.go:92] pod "etcd-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:55.651355 1096371 pod_ready.go:81] duration metric: took 7.477443ms for pod "etcd-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.651363 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.651405 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:43:55.651413 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.651419 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.651424 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.653855 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.654466 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:43:55.654488 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.654495 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.654499 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.656908 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.657483 1096371 pod_ready.go:92] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:55.657500 1096371 pod_ready.go:81] duration metric: took 6.129437ms for pod "etcd-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.657508 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.810790 1096371 request.go:629] Waited for 153.183486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:55.810879 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:55.810887 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.810898 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.810909 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.814643 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:56.010887 1096371 request.go:629] Waited for 195.364967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.010959 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.010966 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.010974 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.010978 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.016371 1096371 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:43:56.211372 1096371 request.go:629] Waited for 53.221404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:56.211443 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:56.211449 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.211456 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.211461 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.215030 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:56.411325 1096371 request.go:629] Waited for 195.396738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.411387 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.411392 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.411400 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.411404 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.414791 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:56.658682 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:56.658707 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.658714 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.658720 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.662087 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:56.811307 1096371 request.go:629] Waited for 148.327175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.811410 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.811419 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.811429 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.811441 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.815038 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:57.158412 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:57.158436 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.158445 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.158449 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.161908 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:57.210946 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:57.210969 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.210978 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.210982 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.214225 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:57.658114 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:57.658149 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.658162 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.658168 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.661706 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:57.662561 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:57.662584 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.662593 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.662599 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.665464 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:57.666181 1096371 pod_ready.go:92] pod "etcd-ha-220492-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:57.666203 1096371 pod_ready.go:81] duration metric: took 2.008687571s for pod "etcd-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
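
The repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from the Kubernetes client's own QPS/Burst limiter delaying requests before they are sent, not from server-side flow control. A minimal sketch of that behaviour with a token bucket from golang.org/x/time/rate; the 5 QPS / burst 10 values are illustrative and not taken from the log:

// Gate outgoing requests with a token-bucket limiter and report any wait time.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // ~5 requests/second, burst of 10

	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if waited := time.Since(start); waited > time.Millisecond {
			// Mirrors the log's "Waited for ... due to client-side throttling".
			fmt.Printf("request %d waited %v before being sent\n", i, waited)
		}
	}
}
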
	I0603 12:43:57.666226 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:57.810534 1096371 request.go:629] Waited for 144.228037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492
	I0603 12:43:57.810625 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492
	I0603 12:43:57.810633 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.810641 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.810646 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.815060 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:58.011169 1096371 request.go:629] Waited for 195.365307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:58.011257 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:58.011263 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.011271 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.011279 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.015015 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:58.015841 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:58.015863 1096371 pod_ready.go:81] duration metric: took 349.622915ms for pod "kube-apiserver-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:58.015872 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:58.210933 1096371 request.go:629] Waited for 194.958077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m02
	I0603 12:43:58.211005 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m02
	I0603 12:43:58.211010 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.211018 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.211026 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.215371 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:58.411477 1096371 request.go:629] Waited for 194.387478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:43:58.411558 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:43:58.411566 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.411577 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.411597 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.426670 1096371 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0603 12:43:58.427342 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:58.427369 1096371 pod_ready.go:81] duration metric: took 411.489767ms for pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:58.427396 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:58.611227 1096371 request.go:629] Waited for 183.715657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:58.611320 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:58.611336 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.611347 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.611354 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.614851 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:58.811360 1096371 request.go:629] Waited for 195.351281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:58.811467 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:58.811473 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.811481 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.811486 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.815323 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.010969 1096371 request.go:629] Waited for 83.254261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:59.011051 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:59.011064 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.011079 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.011090 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.014874 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.210930 1096371 request.go:629] Waited for 195.37591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:59.211008 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:59.211015 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.211027 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.211038 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.214615 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.428256 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:59.428284 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.428293 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.428297 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.432197 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.611345 1096371 request.go:629] Waited for 178.353076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:59.611453 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:59.611460 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.611467 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.611472 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.614758 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.928634 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:59.928659 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.928668 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.928674 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.932105 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.011090 1096371 request.go:629] Waited for 78.235001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:00.011155 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:00.011160 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.011168 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.011173 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.014697 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.428581 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:44:00.428606 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.428613 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.428616 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.432148 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.433088 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:00.433107 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.433118 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.433127 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.436412 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.437015 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:00.437042 1096371 pod_ready.go:81] duration metric: took 2.009636337s for pod "kube-apiserver-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:00.437055 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:00.611461 1096371 request.go:629] Waited for 174.328001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492
	I0603 12:44:00.611561 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492
	I0603 12:44:00.611581 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.611593 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.611603 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.615409 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.811511 1096371 request.go:629] Waited for 195.407138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:00.811579 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:00.811584 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.811592 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.811596 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.815065 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.815768 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:00.815787 1096371 pod_ready.go:81] duration metric: took 378.723871ms for pod "kube-controller-manager-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:00.815797 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:01.010875 1096371 request.go:629] Waited for 194.987952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m02
	I0603 12:44:01.010941 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m02
	I0603 12:44:01.010946 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.010953 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.010957 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.014830 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:01.210945 1096371 request.go:629] Waited for 195.366991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:01.211031 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:01.211038 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.211051 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.211062 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.214923 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:01.215608 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:01.215632 1096371 pod_ready.go:81] duration metric: took 399.828657ms for pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:01.215644 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:01.410600 1096371 request.go:629] Waited for 194.859894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:01.410673 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:01.410678 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.410686 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.410690 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.414051 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:01.611128 1096371 request.go:629] Waited for 196.374452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:01.611213 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:01.611223 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.611234 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.611244 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.614275 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:44:01.811094 1096371 request.go:629] Waited for 95.2615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:01.811183 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:01.811190 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.811199 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.811205 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.815552 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:02.011530 1096371 request.go:629] Waited for 195.337494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.011613 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.011621 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.011632 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.011638 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.015514 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:02.216018 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:02.216043 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.216052 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.216058 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.219384 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:02.410775 1096371 request.go:629] Waited for 190.706668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.410846 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.410856 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.410865 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.410870 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.414994 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:02.716753 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:02.716786 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.716797 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.716805 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.720353 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:02.811450 1096371 request.go:629] Waited for 90.271231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.811537 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.811543 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.811552 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.811559 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.814813 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:03.216672 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:03.216698 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:03.216706 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:03.216710 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:03.220641 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:03.221740 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:03.221764 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:03.221775 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:03.221782 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:03.225530 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:03.226056 1096371 pod_ready.go:102] pod "kube-controller-manager-ha-220492-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 12:44:03.716608 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:03.716642 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:03.716654 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:03.716660 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:03.720707 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:03.721672 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:03.721691 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:03.721698 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:03.721703 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:03.724982 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:04.215874 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:04.215903 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.215915 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.215922 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.219253 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:04.219961 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:04.219978 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.219986 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.219992 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.222791 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:44:04.716689 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:04.716713 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.716721 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.716725 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.720184 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:04.721021 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:04.721038 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.721046 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.721050 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.724144 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:04.724770 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:04.724796 1096371 pod_ready.go:81] duration metric: took 3.509143073s for pod "kube-controller-manager-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:04.724810 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dkzgt" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:04.724903 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dkzgt
	I0603 12:44:04.724915 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.724926 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.724935 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.727679 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:44:04.810611 1096371 request.go:629] Waited for 82.1903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:04.810730 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:04.810749 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.810757 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.810762 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.815132 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:04.815977 1096371 pod_ready.go:92] pod "kube-proxy-dkzgt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:04.815999 1096371 pod_ready.go:81] duration metric: took 91.179243ms for pod "kube-proxy-dkzgt" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:04.816008 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5l8r" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.011523 1096371 request.go:629] Waited for 195.432467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5l8r
	I0603 12:44:05.011607 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5l8r
	I0603 12:44:05.011614 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.011632 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.011647 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.014986 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:05.210787 1096371 request.go:629] Waited for 194.851603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:05.210859 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:05.210864 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.210873 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.210878 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.214896 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:05.215591 1096371 pod_ready.go:92] pod "kube-proxy-m5l8r" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:05.215617 1096371 pod_ready.go:81] duration metric: took 399.601076ms for pod "kube-proxy-m5l8r" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.215632 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2hpg" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.410992 1096371 request.go:629] Waited for 195.257151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2hpg
	I0603 12:44:05.411099 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2hpg
	I0603 12:44:05.411113 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.411124 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.411140 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.415054 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:05.610577 1096371 request.go:629] Waited for 194.232033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:05.610664 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:05.610669 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.610676 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.610680 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.614567 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:05.615166 1096371 pod_ready.go:92] pod "kube-proxy-w2hpg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:05.615192 1096371 pod_ready.go:81] duration metric: took 399.552426ms for pod "kube-proxy-w2hpg" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.615201 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.811327 1096371 request.go:629] Waited for 196.001211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492
	I0603 12:44:05.811428 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492
	I0603 12:44:05.811437 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.811447 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.811454 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.817845 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:44:06.010933 1096371 request.go:629] Waited for 191.357857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:06.010999 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:06.011004 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.011018 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.011022 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.014838 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:06.015579 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:06.015598 1096371 pod_ready.go:81] duration metric: took 400.390489ms for pod "kube-scheduler-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.015609 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.210751 1096371 request.go:629] Waited for 195.041049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m02
	I0603 12:44:06.210819 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m02
	I0603 12:44:06.210824 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.210832 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.210836 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.215303 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:06.411210 1096371 request.go:629] Waited for 195.279246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:06.411325 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:06.411337 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.411348 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.411361 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.414932 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:06.415448 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:06.415467 1096371 pod_ready.go:81] duration metric: took 399.852489ms for pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.415477 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.610489 1096371 request.go:629] Waited for 194.919273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m03
	I0603 12:44:06.610608 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m03
	I0603 12:44:06.610612 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.610620 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.610625 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.614770 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:06.810914 1096371 request.go:629] Waited for 195.37202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:06.810997 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:06.811002 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.811010 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.811015 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.815258 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:06.816177 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:06.816196 1096371 pod_ready.go:81] duration metric: took 400.712759ms for pod "kube-scheduler-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.816216 1096371 pod_ready.go:38] duration metric: took 11.201183722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
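
The wait loop above polls kube-controller-manager-ha-220492-m03 until its Ready condition flips to "True", then walks through the kube-proxy and kube-scheduler pods the same way. As an illustration only (not minikube's own pod_ready helper), a minimal client-go sketch of that Ready check, assuming the default kubeconfig path that minikube writes, could look like:

    package main

    import (
    	"context"
    	"fmt"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	// Load ~/.kube/config (assumed path) and build a clientset.
    	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Fetch one of the pods the log waits on and report its Ready condition.
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
    		"kube-controller-manager-ha-220492-m03", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			fmt.Printf("Ready=%s\n", c.Status) // "False" while the log is still polling, "True" at 12:44:04
    		}
    	}
    }
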
	I0603 12:44:06.816239 1096371 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:44:06.816303 1096371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:44:06.833784 1096371 api_server.go:72] duration metric: took 17.524633386s to wait for apiserver process to appear ...
	I0603 12:44:06.833813 1096371 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:44:06.833848 1096371 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0603 12:44:06.838436 1096371 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0603 12:44:06.838515 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0603 12:44:06.838524 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.838531 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.838535 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.839487 1096371 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 12:44:06.839667 1096371 api_server.go:141] control plane version: v1.30.1
	I0603 12:44:06.839685 1096371 api_server.go:131] duration metric: took 5.86597ms to wait for apiserver health ...
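
The health gate here is just two plain GETs against the apiserver: /healthz must return 200 "ok", and /version reports the control-plane version (v1.30.1 above; kubeadm's default system:public-info-viewer binding normally allows anonymous access to both paths). A rough Go sketch of the same probe against the endpoint from the log; TLS verification is skipped only to keep the example short, while the real client trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Endpoint taken from the log above; InsecureSkipVerify is a shortcut for the sketch only.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get("https://192.168.39.6:8443" + path)
    		if err != nil {
    			panic(err)
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("GET %s -> %d %s\n", path, resp.StatusCode, body)
    	}
    }
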
	I0603 12:44:06.839693 1096371 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:44:07.011386 1096371 request.go:629] Waited for 171.61689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:44:07.011494 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:44:07.011505 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:07.011518 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:07.011530 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:07.019917 1096371 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 12:44:07.026879 1096371 system_pods.go:59] 24 kube-system pods found
	I0603 12:44:07.026905 1096371 system_pods.go:61] "coredns-7db6d8ff4d-d2tgp" [534e15ed-2e68-4275-8725-099d7240c25d] Running
	I0603 12:44:07.026910 1096371 system_pods.go:61] "coredns-7db6d8ff4d-q7687" [4e78d9e6-8feb-44ef-b44d-ed6039ab00ee] Running
	I0603 12:44:07.026913 1096371 system_pods.go:61] "etcd-ha-220492" [f2bcc3d2-bb06-4775-9080-72f7652a48b1] Running
	I0603 12:44:07.026916 1096371 system_pods.go:61] "etcd-ha-220492-m02" [035f5269-9ad9-4be8-a582-59bf15e1f6f1] Running
	I0603 12:44:07.026919 1096371 system_pods.go:61] "etcd-ha-220492-m03" [04c1c8e0-cd55-4bcc-99fd-d8a51aa3dde5] Running
	I0603 12:44:07.026922 1096371 system_pods.go:61] "kindnet-5p8f7" [12b97c9f-e363-42c3-9ac9-d808c47de63a] Running
	I0603 12:44:07.026925 1096371 system_pods.go:61] "kindnet-gkd6p" [f810b6d5-e0e8-4b1a-a5ef-c0a44452ecb7] Running
	I0603 12:44:07.026928 1096371 system_pods.go:61] "kindnet-hbl6v" [9f697f13-4a60-4247-bb5e-a8bcdd3336cd] Running
	I0603 12:44:07.026930 1096371 system_pods.go:61] "kube-apiserver-ha-220492" [a5d2882e-9fb6-4c38-b232-5dc8cb7a009e] Running
	I0603 12:44:07.026933 1096371 system_pods.go:61] "kube-apiserver-ha-220492-m02" [8ef1ba46-3175-4524-8979-fbb5f4d0608a] Running
	I0603 12:44:07.026936 1096371 system_pods.go:61] "kube-apiserver-ha-220492-m03" [f91fd8b8-eb1c-4441-88fc-2955f82c8cda] Running
	I0603 12:44:07.026939 1096371 system_pods.go:61] "kube-controller-manager-ha-220492" [38d8a477-8b59-43d0-9004-a70023c07b14] Running
	I0603 12:44:07.026944 1096371 system_pods.go:61] "kube-controller-manager-ha-220492-m02" [9cde04ca-9c61-4015-9f2f-08c9db8439cc] Running
	I0603 12:44:07.026947 1096371 system_pods.go:61] "kube-controller-manager-ha-220492-m03" [98b6bd4a-cc01-489d-a1c6-97428cac9348] Running
	I0603 12:44:07.026950 1096371 system_pods.go:61] "kube-proxy-dkzgt" [e1536cb0-2da1-4d9a-a6f7-50adfb8f7c9a] Running
	I0603 12:44:07.026953 1096371 system_pods.go:61] "kube-proxy-m5l8r" [de526b5c-27a0-4830-9634-039d4eab49b5] Running
	I0603 12:44:07.026956 1096371 system_pods.go:61] "kube-proxy-w2hpg" [51a52e47-6a1e-4f9c-ba1b-feb3e362531a] Running
	I0603 12:44:07.026959 1096371 system_pods.go:61] "kube-scheduler-ha-220492" [40a56d71-3787-44fa-a3b7-9d2dc2bcf5ac] Running
	I0603 12:44:07.026962 1096371 system_pods.go:61] "kube-scheduler-ha-220492-m02" [6dede50f-8a71-4a7a-97fa-8cc4d2a6ef8c] Running
	I0603 12:44:07.026966 1096371 system_pods.go:61] "kube-scheduler-ha-220492-m03" [f3205a74-3d7e-465a-ac13-f1e36535f16a] Running
	I0603 12:44:07.026973 1096371 system_pods.go:61] "kube-vip-ha-220492" [577ecb1f-e5df-4494-b898-7d2d8e79151d] Running
	I0603 12:44:07.026976 1096371 system_pods.go:61] "kube-vip-ha-220492-m02" [a53477a8-aa28-443e-bf5d-1abb3b66ce57] Running
	I0603 12:44:07.026979 1096371 system_pods.go:61] "kube-vip-ha-220492-m03" [6495d959-2043-486b-b207-6314877f6d43] Running
	I0603 12:44:07.026982 1096371 system_pods.go:61] "storage-provisioner" [f85b2808-26fa-4608-a208-2c11eaddc293] Running
	I0603 12:44:07.026987 1096371 system_pods.go:74] duration metric: took 187.288244ms to wait for pod list to return data ...
	I0603 12:44:07.026996 1096371 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:44:07.211455 1096371 request.go:629] Waited for 184.374699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0603 12:44:07.211526 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0603 12:44:07.211532 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:07.211540 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:07.211545 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:07.215131 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:07.215283 1096371 default_sa.go:45] found service account: "default"
	I0603 12:44:07.215298 1096371 default_sa.go:55] duration metric: took 188.293905ms for default service account to be created ...
	I0603 12:44:07.215307 1096371 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:44:07.411495 1096371 request.go:629] Waited for 196.082869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:44:07.411581 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:44:07.411590 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:07.411598 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:07.411604 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:07.418357 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:44:07.424425 1096371 system_pods.go:86] 24 kube-system pods found
	I0603 12:44:07.424456 1096371 system_pods.go:89] "coredns-7db6d8ff4d-d2tgp" [534e15ed-2e68-4275-8725-099d7240c25d] Running
	I0603 12:44:07.424460 1096371 system_pods.go:89] "coredns-7db6d8ff4d-q7687" [4e78d9e6-8feb-44ef-b44d-ed6039ab00ee] Running
	I0603 12:44:07.424465 1096371 system_pods.go:89] "etcd-ha-220492" [f2bcc3d2-bb06-4775-9080-72f7652a48b1] Running
	I0603 12:44:07.424468 1096371 system_pods.go:89] "etcd-ha-220492-m02" [035f5269-9ad9-4be8-a582-59bf15e1f6f1] Running
	I0603 12:44:07.424472 1096371 system_pods.go:89] "etcd-ha-220492-m03" [04c1c8e0-cd55-4bcc-99fd-d8a51aa3dde5] Running
	I0603 12:44:07.424476 1096371 system_pods.go:89] "kindnet-5p8f7" [12b97c9f-e363-42c3-9ac9-d808c47de63a] Running
	I0603 12:44:07.424480 1096371 system_pods.go:89] "kindnet-gkd6p" [f810b6d5-e0e8-4b1a-a5ef-c0a44452ecb7] Running
	I0603 12:44:07.424484 1096371 system_pods.go:89] "kindnet-hbl6v" [9f697f13-4a60-4247-bb5e-a8bcdd3336cd] Running
	I0603 12:44:07.424488 1096371 system_pods.go:89] "kube-apiserver-ha-220492" [a5d2882e-9fb6-4c38-b232-5dc8cb7a009e] Running
	I0603 12:44:07.424492 1096371 system_pods.go:89] "kube-apiserver-ha-220492-m02" [8ef1ba46-3175-4524-8979-fbb5f4d0608a] Running
	I0603 12:44:07.424495 1096371 system_pods.go:89] "kube-apiserver-ha-220492-m03" [f91fd8b8-eb1c-4441-88fc-2955f82c8cda] Running
	I0603 12:44:07.424499 1096371 system_pods.go:89] "kube-controller-manager-ha-220492" [38d8a477-8b59-43d0-9004-a70023c07b14] Running
	I0603 12:44:07.424504 1096371 system_pods.go:89] "kube-controller-manager-ha-220492-m02" [9cde04ca-9c61-4015-9f2f-08c9db8439cc] Running
	I0603 12:44:07.424508 1096371 system_pods.go:89] "kube-controller-manager-ha-220492-m03" [98b6bd4a-cc01-489d-a1c6-97428cac9348] Running
	I0603 12:44:07.424511 1096371 system_pods.go:89] "kube-proxy-dkzgt" [e1536cb0-2da1-4d9a-a6f7-50adfb8f7c9a] Running
	I0603 12:44:07.424515 1096371 system_pods.go:89] "kube-proxy-m5l8r" [de526b5c-27a0-4830-9634-039d4eab49b5] Running
	I0603 12:44:07.424523 1096371 system_pods.go:89] "kube-proxy-w2hpg" [51a52e47-6a1e-4f9c-ba1b-feb3e362531a] Running
	I0603 12:44:07.424526 1096371 system_pods.go:89] "kube-scheduler-ha-220492" [40a56d71-3787-44fa-a3b7-9d2dc2bcf5ac] Running
	I0603 12:44:07.424530 1096371 system_pods.go:89] "kube-scheduler-ha-220492-m02" [6dede50f-8a71-4a7a-97fa-8cc4d2a6ef8c] Running
	I0603 12:44:07.424536 1096371 system_pods.go:89] "kube-scheduler-ha-220492-m03" [f3205a74-3d7e-465a-ac13-f1e36535f16a] Running
	I0603 12:44:07.424540 1096371 system_pods.go:89] "kube-vip-ha-220492" [577ecb1f-e5df-4494-b898-7d2d8e79151d] Running
	I0603 12:44:07.424545 1096371 system_pods.go:89] "kube-vip-ha-220492-m02" [a53477a8-aa28-443e-bf5d-1abb3b66ce57] Running
	I0603 12:44:07.424550 1096371 system_pods.go:89] "kube-vip-ha-220492-m03" [6495d959-2043-486b-b207-6314877f6d43] Running
	I0603 12:44:07.424556 1096371 system_pods.go:89] "storage-provisioner" [f85b2808-26fa-4608-a208-2c11eaddc293] Running
	I0603 12:44:07.424562 1096371 system_pods.go:126] duration metric: took 209.250131ms to wait for k8s-apps to be running ...
	I0603 12:44:07.424571 1096371 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:44:07.424620 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:44:07.442242 1096371 system_svc.go:56] duration metric: took 17.658928ms WaitForService to wait for kubelet
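
The kubelet gate is a single exit-code check: `sudo systemctl is-active --quiet service kubelet` over SSH, where exit status 0 means the unit is active. Run locally (SSH and sudo omitted), an equivalent Go sketch is:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// `systemctl is-active --quiet <unit>` prints nothing and exits 0 only when the unit is active.
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
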
	I0603 12:44:07.442285 1096371 kubeadm.go:576] duration metric: took 18.133140007s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:44:07.442306 1096371 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:44:07.610604 1096371 request.go:629] Waited for 168.185982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0603 12:44:07.610688 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0603 12:44:07.610697 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:07.610705 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:07.610711 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:07.614118 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:07.615044 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:44:07.615063 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:44:07.615075 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:44:07.615078 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:44:07.615083 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:44:07.615085 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:44:07.615089 1096371 node_conditions.go:105] duration metric: took 172.779216ms to run NodePressure ...
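
The NodePressure step lists all three nodes in one GET /api/v1/nodes and reads their capacity, which is where the repeated "17734596Ki ephemeral storage / 2 CPUs" figures come from. A hedged client-go equivalent, again assuming the default kubeconfig path:

    package main

    import (
    	"context"
    	"fmt"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// One list call, then print the two capacity figures the log reports per node.
    	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
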
	I0603 12:44:07.615101 1096371 start.go:240] waiting for startup goroutines ...
	I0603 12:44:07.615124 1096371 start.go:254] writing updated cluster config ...
	I0603 12:44:07.615413 1096371 ssh_runner.go:195] Run: rm -f paused
	I0603 12:44:07.672218 1096371 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:44:07.674263 1096371 out.go:177] * Done! kubectl is now configured to use "ha-220492" cluster and "default" namespace by default
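
The closing message means the "ha-220492" context is now current in the user's kubeconfig. For completeness, a small sketch (assumptions: client-go's clientcmd loader, default kubeconfig path) of selecting that context programmatically rather than relying on current-context:

    package main

    import (
    	"fmt"
    	"path/filepath"

    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	// Pin the context that `minikube start` selected, regardless of the file's current-context.
    	rules := &clientcmd.ClientConfigLoadingRules{
    		ExplicitPath: filepath.Join(homedir.HomeDir(), ".kube", "config"),
    	}
    	overrides := &clientcmd.ConfigOverrides{CurrentContext: "ha-220492"}
    	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("API endpoint for ha-220492:", cfg.Host)
    }
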
	
	
	==> CRI-O <==
	Jun 03 12:47:32 ha-220492 crio[683]: time="2024-06-03 12:47:32.966799317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418852966778986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66845789-5463-4791-b41e-8db0e10a6041 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:47:32 ha-220492 crio[683]: time="2024-06-03 12:47:32.967349318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b58a4f05-a72b-44ac-bcaf-349375835cc8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:32 ha-220492 crio[683]: time="2024-06-03 12:47:32.967418828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b58a4f05-a72b-44ac-bcaf-349375835cc8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:32 ha-220492 crio[683]: time="2024-06-03 12:47:32.967640549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717418649829753613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac,PodSandboxId:dfdd288abd0dbb2c347f4b52b1f4e58c064f35d506428817069f29bad7ee5c6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717418505770304298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830825065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830864910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e
68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f,PodSandboxId:2f740d6ed5034e98ab9ab5be6be150e83efaaa3ca14221bf7e2586014c1790e5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1717418503917588747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171741850
1295981097,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b,PodSandboxId:c6c8762f9acbca1e30161dad5fa6f6e02048664609c121db69cc5ffa0fb414fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17174184833
00952885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304b8acf7929042c2226774df1f72a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c,PodSandboxId:5c63ebce798f7c7bfe5cbe2b12cef00c703beddbd786066f6d5df12732ae6a1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717418481117930462,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b,PodSandboxId:03368aff48ff11a612512e49f5c15d496c6a6ada9e73d6b27bef501137483fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717418481049274321,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717418481055328781,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717418480896785560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b58a4f05-a72b-44ac-bcaf-349375835cc8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.003936705Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c6529e6-31b6-4ade-8da7-80d6e4edb04c name=/runtime.v1.RuntimeService/Version
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.004171784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c6529e6-31b6-4ade-8da7-80d6e4edb04c name=/runtime.v1.RuntimeService/Version
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.005406130Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cefbc04-9f45-43d4-adad-41010e18b1bd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.005830375Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418853005809214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cefbc04-9f45-43d4-adad-41010e18b1bd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.006401166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df12b68e-1851-4b3e-acd6-f8531da125e1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.006485106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df12b68e-1851-4b3e-acd6-f8531da125e1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.006704668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717418649829753613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac,PodSandboxId:dfdd288abd0dbb2c347f4b52b1f4e58c064f35d506428817069f29bad7ee5c6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717418505770304298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830825065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830864910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e
68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f,PodSandboxId:2f740d6ed5034e98ab9ab5be6be150e83efaaa3ca14221bf7e2586014c1790e5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1717418503917588747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171741850
1295981097,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b,PodSandboxId:c6c8762f9acbca1e30161dad5fa6f6e02048664609c121db69cc5ffa0fb414fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17174184833
00952885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304b8acf7929042c2226774df1f72a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c,PodSandboxId:5c63ebce798f7c7bfe5cbe2b12cef00c703beddbd786066f6d5df12732ae6a1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717418481117930462,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b,PodSandboxId:03368aff48ff11a612512e49f5c15d496c6a6ada9e73d6b27bef501137483fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717418481049274321,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717418481055328781,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717418480896785560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df12b68e-1851-4b3e-acd6-f8531da125e1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.045072909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e274eb4e-6a24-482b-a3ab-90762c9e80f2 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.045148680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e274eb4e-6a24-482b-a3ab-90762c9e80f2 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.046550321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3da064dc-4bfc-472e-9fef-175a8ba7bdcf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.047283035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418853047259292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3da064dc-4bfc-472e-9fef-175a8ba7bdcf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.047994022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91e96ea1-2193-4101-ab48-361eaa1134e3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.048227469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91e96ea1-2193-4101-ab48-361eaa1134e3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.048452196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717418649829753613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac,PodSandboxId:dfdd288abd0dbb2c347f4b52b1f4e58c064f35d506428817069f29bad7ee5c6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717418505770304298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830825065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830864910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e
68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f,PodSandboxId:2f740d6ed5034e98ab9ab5be6be150e83efaaa3ca14221bf7e2586014c1790e5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1717418503917588747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171741850
1295981097,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b,PodSandboxId:c6c8762f9acbca1e30161dad5fa6f6e02048664609c121db69cc5ffa0fb414fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17174184833
00952885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304b8acf7929042c2226774df1f72a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c,PodSandboxId:5c63ebce798f7c7bfe5cbe2b12cef00c703beddbd786066f6d5df12732ae6a1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717418481117930462,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b,PodSandboxId:03368aff48ff11a612512e49f5c15d496c6a6ada9e73d6b27bef501137483fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717418481049274321,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717418481055328781,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717418480896785560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91e96ea1-2193-4101-ab48-361eaa1134e3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.089926866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18bbaf9f-63dd-4663-a9ad-30277d1b8cc2 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.089999596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18bbaf9f-63dd-4663-a9ad-30277d1b8cc2 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.091278359Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd45e144-5903-4485-b782-87eb5ff4b9d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.092206690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418853092181176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd45e144-5903-4485-b782-87eb5ff4b9d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.092795959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a080216e-38bf-43be-8c2b-f94f5466a87a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.092847883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a080216e-38bf-43be-8c2b-f94f5466a87a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:47:33 ha-220492 crio[683]: time="2024-06-03 12:47:33.093157187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717418649829753613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac,PodSandboxId:dfdd288abd0dbb2c347f4b52b1f4e58c064f35d506428817069f29bad7ee5c6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717418505770304298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830825065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830864910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e
68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f,PodSandboxId:2f740d6ed5034e98ab9ab5be6be150e83efaaa3ca14221bf7e2586014c1790e5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1717418503917588747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171741850
1295981097,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b,PodSandboxId:c6c8762f9acbca1e30161dad5fa6f6e02048664609c121db69cc5ffa0fb414fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17174184833
00952885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304b8acf7929042c2226774df1f72a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c,PodSandboxId:5c63ebce798f7c7bfe5cbe2b12cef00c703beddbd786066f6d5df12732ae6a1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717418481117930462,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b,PodSandboxId:03368aff48ff11a612512e49f5c15d496c6a6ada9e73d6b27bef501137483fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717418481049274321,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717418481055328781,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717418480896785560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a080216e-38bf-43be-8c2b-f94f5466a87a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	76c9e115804f2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   c73634cd0ed83       busybox-fc5497c4f-5z6j2
	50f524d71cd1f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   6d9c5f1a45b9e       coredns-7db6d8ff4d-d2tgp
	7c67da4b30c5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   1b5bd65416e85       coredns-7db6d8ff4d-q7687
	1b000c5164ef9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   dfdd288abd0db       storage-provisioner
	e802c94fbf7b6       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    5 minutes ago       Running             kindnet-cni               0                   2f740d6ed5034       kindnet-hbl6v
	16c93dcdad420       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                0                   4d41713a63ac5       kube-proxy-w2hpg
	1fe31d7dcb7c4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   c6c8762f9acbc       kube-vip-ha-220492
	f2c6a50d20a2f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      6 minutes ago       Running             kube-apiserver            0                   5c63ebce798f7       kube-apiserver-ha-220492
	3f1c2bb32752f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   ba8b6aec50011       etcd-ha-220492
	24aa5625e9a8a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      6 minutes ago       Running             kube-controller-manager   0                   03368aff48ff1       kube-controller-manager-ha-220492
	86f8a60e53334       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      6 minutes ago       Running             kube-scheduler            0                   b96e7f287499d       kube-scheduler-ha-220492
	
	
	==> coredns [50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e] <==
	[INFO] 10.244.0.4:58974 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000157943s
	[INFO] 10.244.0.4:40096 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002616941s
	[INFO] 10.244.0.4:60549 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175831s
	[INFO] 10.244.0.4:38004 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110811s
	[INFO] 10.244.0.4:35443 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076183s
	[INFO] 10.244.1.2:40738 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162292s
	[INFO] 10.244.1.2:47526 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136462s
	[INFO] 10.244.1.2:53322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114547s
	[INFO] 10.244.2.2:47547 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145066s
	[INFO] 10.244.2.2:43785 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094815s
	[INFO] 10.244.2.2:54501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319495s
	[INFO] 10.244.2.2:55983 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086973s
	[INFO] 10.244.2.2:56195 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069334s
	[INFO] 10.244.0.4:42110 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064533s
	[INFO] 10.244.0.4:48697 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058629s
	[INFO] 10.244.1.2:42865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168668s
	[INFO] 10.244.1.2:56794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111494s
	[INFO] 10.244.1.2:58581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084125s
	[INFO] 10.244.1.2:50954 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099179s
	[INFO] 10.244.2.2:42915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142235s
	[INFO] 10.244.2.2:49410 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102812s
	[INFO] 10.244.0.4:51178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00019093s
	[INFO] 10.244.1.2:40502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168017s
	[INFO] 10.244.1.2:35921 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180824s
	[INFO] 10.244.1.2:40369 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155572s
	
	
	==> coredns [7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934] <==
	[INFO] 10.244.2.2:43079 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000108982s
	[INFO] 10.244.2.2:44322 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001648137s
	[INFO] 10.244.0.4:45431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092001s
	[INFO] 10.244.0.4:56388 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.032198469s
	[INFO] 10.244.0.4:55805 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000292911s
	[INFO] 10.244.1.2:36984 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001651953s
	[INFO] 10.244.1.2:59707 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013257s
	[INFO] 10.244.1.2:43132 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294041s
	[INFO] 10.244.1.2:50044 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148444s
	[INFO] 10.244.1.2:46108 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000262338s
	[INFO] 10.244.2.2:59857 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001619455s
	[INFO] 10.244.2.2:37703 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098955s
	[INFO] 10.244.2.2:51044 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180769s
	[INFO] 10.244.0.4:56245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077571s
	[INFO] 10.244.0.4:40429 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005283s
	[INFO] 10.244.2.2:55900 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100265s
	[INFO] 10.244.2.2:57003 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000955s
	[INFO] 10.244.0.4:39653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107486s
	[INFO] 10.244.0.4:50505 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152153s
	[INFO] 10.244.0.4:40598 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156098s
	[INFO] 10.244.1.2:37651 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154868s
	[INFO] 10.244.2.2:47903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111761s
	[INFO] 10.244.2.2:55067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000076585s
	[INFO] 10.244.2.2:39348 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123715s
	[INFO] 10.244.2.2:33705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109704s
	
	
	==> describe nodes <==
	Name:               ha-220492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_41_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:41:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:44:31 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:44:31 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:44:31 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:44:31 +0000   Mon, 03 Jun 2024 12:41:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-220492
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bebf6ef8229e4a0498f737d165a96550
	  System UUID:                bebf6ef8-229e-4a04-98f7-37d165a96550
	  Boot ID:                    38c7d220-f8e0-4890-a7e1-09c3bc826d0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5z6j2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 coredns-7db6d8ff4d-d2tgp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m53s
	  kube-system                 coredns-7db6d8ff4d-q7687             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m53s
	  kube-system                 etcd-ha-220492                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m6s
	  kube-system                 kindnet-hbl6v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m53s
	  kube-system                 kube-apiserver-ha-220492             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-ha-220492    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-proxy-w2hpg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-scheduler-ha-220492             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-vip-ha-220492                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m51s  kube-proxy       
	  Normal  Starting                 6m6s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m6s   kubelet          Node ha-220492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s   kubelet          Node ha-220492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s   kubelet          Node ha-220492 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m54s  node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal  NodeReady                5m48s  kubelet          Node ha-220492 status is now: NodeReady
	  Normal  RegisteredNode           4m41s  node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal  RegisteredNode           3m30s  node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	
	
	Name:               ha-220492-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_42_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:42:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:45:08 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 12:44:37 +0000   Mon, 03 Jun 2024 12:45:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 12:44:37 +0000   Mon, 03 Jun 2024 12:45:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 12:44:37 +0000   Mon, 03 Jun 2024 12:45:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 12:44:37 +0000   Mon, 03 Jun 2024 12:45:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    ha-220492-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1658a5c6e8394d57a265332808e714ab
	  System UUID:                1658a5c6-e839-4d57-a265-332808e714ab
	  Boot ID:                    a5e41f0e-e9a1-4e3c-9d02-9b2c849b1b76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m229v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 etcd-ha-220492-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m57s
	  kube-system                 kindnet-5p8f7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m59s
	  kube-system                 kube-apiserver-ha-220492-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-ha-220492-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-dkzgt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-ha-220492-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-vip-ha-220492-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m54s                  kube-proxy       
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node ha-220492-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node ha-220492-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node ha-220492-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  RegisteredNode           3m30s                  node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-220492-m02 status is now: NodeNotReady
	
	
	Name:               ha-220492-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_43_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:43:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:47:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:44:15 +0000   Mon, 03 Jun 2024 12:43:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:44:15 +0000   Mon, 03 Jun 2024 12:43:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:44:15 +0000   Mon, 03 Jun 2024 12:43:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:44:15 +0000   Mon, 03 Jun 2024 12:43:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-220492-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1055ed032f443e996570a5a0e130a0f
	  System UUID:                c1055ed0-32f4-43e9-9657-0a5a0e130a0f
	  Boot ID:                    eb7a7193-bab4-4090-948a-7512b86c5924
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-stmtj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 etcd-ha-220492-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m46s
	  kube-system                 kindnet-gkd6p                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m48s
	  kube-system                 kube-apiserver-ha-220492-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-controller-manager-ha-220492-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-proxy-m5l8r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-scheduler-ha-220492-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-vip-ha-220492-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m48s)  kubelet          Node ha-220492-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m48s)  kubelet          Node ha-220492-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m48s)  kubelet          Node ha-220492-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	  Normal  RegisteredNode           3m44s                  node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	  Normal  RegisteredNode           3m30s                  node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	
	
	Name:               ha-220492-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_44_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:44:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:47:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:45:12 +0000   Mon, 03 Jun 2024 12:44:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:45:12 +0000   Mon, 03 Jun 2024 12:44:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:45:12 +0000   Mon, 03 Jun 2024 12:44:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:45:12 +0000   Mon, 03 Jun 2024 12:44:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-220492-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6e57d6c6ec64017a56a85c3aa55fe71
	  System UUID:                c6e57d6c-6ec6-4017-a56a-85c3aa55fe71
	  Boot ID:                    89c71749-9840-4d2b-813a-335eed63de23
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-l7rsb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m51s
	  kube-system                 kube-proxy-ggdgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m51s (x2 over 2m52s)  kubelet          Node ha-220492-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x2 over 2m52s)  kubelet          Node ha-220492-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x2 over 2m52s)  kubelet          Node ha-220492-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal  RegisteredNode           2m46s                  node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-220492-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun 3 12:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051399] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040129] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.496498] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.471458] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.572370] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 12:41] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.058474] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056951] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.164438] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.150241] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.258150] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.221845] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.557727] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.059101] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.202766] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.082984] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.083493] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.330072] kauditd_printk_skb: 68 callbacks suppressed
	
	
	==> etcd [3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156] <==
	{"level":"warn","ts":"2024-06-03T12:47:33.357107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.362583Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.369931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.374939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.388708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.395194Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.40184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.405691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.409489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.417968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.423898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.42929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.433997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.43756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.444667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.450754Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.455632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.455817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.458838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.461623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.466351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.471328Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.477153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.545235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:47:33.556095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:47:33 up 6 min,  0 users,  load average: 0.29, 0.30, 0.15
	Linux ha-220492 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f] <==
	I0603 12:46:55.073544       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:47:05.080115       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:47:05.080170       1 main.go:227] handling current node
	I0603 12:47:05.080180       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:47:05.080186       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:47:05.080298       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0603 12:47:05.080323       1 main.go:250] Node ha-220492-m03 has CIDR [10.244.2.0/24] 
	I0603 12:47:05.080373       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:47:05.080395       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:47:15.091300       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:47:15.091372       1 main.go:227] handling current node
	I0603 12:47:15.091397       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:47:15.091415       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:47:15.091747       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0603 12:47:15.091778       1 main.go:250] Node ha-220492-m03 has CIDR [10.244.2.0/24] 
	I0603 12:47:15.092070       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:47:15.092097       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:47:25.101207       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:47:25.101285       1 main.go:227] handling current node
	I0603 12:47:25.101315       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:47:25.101339       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:47:25.101485       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0603 12:47:25.101535       1 main.go:250] Node ha-220492-m03 has CIDR [10.244.2.0/24] 
	I0603 12:47:25.101607       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:47:25.101625       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c] <==
	I0603 12:41:26.555382       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 12:41:27.119199       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 12:41:27.135822       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 12:41:27.303429       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 12:41:40.412270       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0603 12:41:40.610430       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0603 12:44:11.353702       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40030: use of closed network connection
	E0603 12:44:11.562518       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40048: use of closed network connection
	E0603 12:44:11.742844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40062: use of closed network connection
	E0603 12:44:11.979592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40072: use of closed network connection
	E0603 12:44:12.166800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40090: use of closed network connection
	E0603 12:44:12.353276       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40118: use of closed network connection
	E0603 12:44:12.541514       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40124: use of closed network connection
	E0603 12:44:12.730492       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40138: use of closed network connection
	E0603 12:44:12.935958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40158: use of closed network connection
	E0603 12:44:13.258657       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40180: use of closed network connection
	E0603 12:44:13.443719       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40206: use of closed network connection
	E0603 12:44:13.629404       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40216: use of closed network connection
	E0603 12:44:13.809363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40230: use of closed network connection
	E0603 12:44:14.007269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40252: use of closed network connection
	E0603 12:44:14.198902       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40272: use of closed network connection
	E0603 12:44:42.929146       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0603 12:44:42.929186       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0603 12:44:42.930374       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0603 12:44:42.930593       1 timeout.go:142] post-timeout activity - time-elapsed: 2.024799ms, GET "/api/v1/nodes" result: <nil>
	
	
	==> kube-controller-manager [24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b] <==
	I0603 12:43:49.913966       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220492-m03"
	I0603 12:44:08.594527       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.593843ms"
	I0603 12:44:08.627660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.878377ms"
	I0603 12:44:08.627814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.191µs"
	I0603 12:44:08.637872       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="412.096µs"
	I0603 12:44:08.798573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.084361ms"
	I0603 12:44:08.990690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="192.043093ms"
	E0603 12:44:08.990947       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0603 12:44:09.170301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="179.042579ms"
	I0603 12:44:09.170723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="213.135µs"
	I0603 12:44:10.059779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.898187ms"
	I0603 12:44:10.060098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.975µs"
	I0603 12:44:10.364592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.921701ms"
	I0603 12:44:10.365156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="145.576µs"
	I0603 12:44:10.857084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.566086ms"
	I0603 12:44:10.857350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.316µs"
	E0603 12:44:41.979137       1 certificate_controller.go:146] Sync csr-dsj4z failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-dsj4z": the object has been modified; please apply your changes to the latest version and try again
	E0603 12:44:41.982628       1 certificate_controller.go:146] Sync csr-dsj4z failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-dsj4z": the object has been modified; please apply your changes to the latest version and try again
	I0603 12:44:42.261853       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-220492-m04\" does not exist"
	I0603 12:44:42.277634       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-220492-m04" podCIDRs=["10.244.3.0/24"]
	I0603 12:44:44.938801       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220492-m04"
	I0603 12:44:51.906684       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220492-m04"
	I0603 12:45:48.763656       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220492-m04"
	I0603 12:45:48.860503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.875491ms"
	I0603 12:45:48.861605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.062µs"
	
	
	==> kube-proxy [16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5] <==
	I0603 12:41:41.653918       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:41:41.666625       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	I0603 12:41:41.746204       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:41:41.746284       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:41:41.746307       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:41:41.756637       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:41:41.759208       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:41:41.759292       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:41:41.764714       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:41:41.764758       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:41:41.764789       1 config.go:192] "Starting service config controller"
	I0603 12:41:41.764793       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:41:41.765665       1 config.go:319] "Starting node config controller"
	I0603 12:41:41.765696       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:41:41.865499       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:41:41.865545       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:41:41.865850       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35] <==
	W0603 12:41:25.685528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 12:41:25.685568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 12:41:25.795923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:41:25.796132       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:41:25.805134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:41:25.805179       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:41:25.890812       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:41:25.890868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:41:25.955492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:41:25.955540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 12:41:26.130687       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:41:26.130750       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:41:29.273960       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 12:43:45.274599       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-m5l8r\": pod kube-proxy-m5l8r is already assigned to node \"ha-220492-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-m5l8r" node="ha-220492-m03"
	E0603 12:43:45.274917       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-m5l8r\": pod kube-proxy-m5l8r is already assigned to node \"ha-220492-m03\"" pod="kube-system/kube-proxy-m5l8r"
	I0603 12:43:45.274999       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-m5l8r" node="ha-220492-m03"
	I0603 12:44:08.559432       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9566ea60-1af8-46e1-93a0-071ebaa32d09" pod="default/busybox-fc5497c4f-m229v" assumedNode="ha-220492-m02" currentNode="ha-220492-m03"
	E0603 12:44:08.570382       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-m229v\": pod busybox-fc5497c4f-m229v is already assigned to node \"ha-220492-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-m229v" node="ha-220492-m03"
	E0603 12:44:08.570476       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9566ea60-1af8-46e1-93a0-071ebaa32d09(default/busybox-fc5497c4f-m229v) was assumed on ha-220492-m03 but assigned to ha-220492-m02" pod="default/busybox-fc5497c4f-m229v"
	E0603 12:44:08.570509       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-m229v\": pod busybox-fc5497c4f-m229v is already assigned to node \"ha-220492-m02\"" pod="default/busybox-fc5497c4f-m229v"
	I0603 12:44:08.570570       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-m229v" node="ha-220492-m02"
	E0603 12:44:42.337402       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ggdgz\": pod kube-proxy-ggdgz is already assigned to node \"ha-220492-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ggdgz" node="ha-220492-m04"
	E0603 12:44:42.337477       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6de7aa57-0339-4982-a792-5adf344ad155(kube-system/kube-proxy-ggdgz) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ggdgz"
	E0603 12:44:42.337499       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ggdgz\": pod kube-proxy-ggdgz is already assigned to node \"ha-220492-m04\"" pod="kube-system/kube-proxy-ggdgz"
	I0603 12:44:42.337519       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ggdgz" node="ha-220492-m04"
	
	
	==> kubelet <==
	Jun 03 12:43:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:43:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:43:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:44:08 ha-220492 kubelet[1372]: I0603 12:44:08.615731    1372 topology_manager.go:215] "Topology Admit Handler" podUID="776fef6b-c7d6-4793-a168-5102737dd302" podNamespace="default" podName="busybox-fc5497c4f-5z6j2"
	Jun 03 12:44:08 ha-220492 kubelet[1372]: I0603 12:44:08.714675    1372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qft\" (UniqueName: \"kubernetes.io/projected/776fef6b-c7d6-4793-a168-5102737dd302-kube-api-access-n8qft\") pod \"busybox-fc5497c4f-5z6j2\" (UID: \"776fef6b-c7d6-4793-a168-5102737dd302\") " pod="default/busybox-fc5497c4f-5z6j2"
	Jun 03 12:44:27 ha-220492 kubelet[1372]: E0603 12:44:27.283768    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:44:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:44:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:44:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:44:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:45:27 ha-220492 kubelet[1372]: E0603 12:45:27.279817    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:45:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:45:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:45:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:45:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:46:27 ha-220492 kubelet[1372]: E0603 12:46:27.280890    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:46:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:46:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:46:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:46:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:47:27 ha-220492 kubelet[1372]: E0603 12:47:27.280298    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:47:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:47:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:47:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:47:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-220492 -n ha-220492
helpers_test.go:261: (dbg) Run:  kubectl --context ha-220492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (55.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 3 (3.211725164s)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220492-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:47:38.105878 1101149 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:47:38.105984 1101149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:38.105992 1101149 out.go:304] Setting ErrFile to fd 2...
	I0603 12:47:38.105996 1101149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:38.106212 1101149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:47:38.106372 1101149 out.go:298] Setting JSON to false
	I0603 12:47:38.106395 1101149 mustload.go:65] Loading cluster: ha-220492
	I0603 12:47:38.106520 1101149 notify.go:220] Checking for updates...
	I0603 12:47:38.106757 1101149 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:47:38.106773 1101149 status.go:255] checking status of ha-220492 ...
	I0603 12:47:38.107144 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:38.107218 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:38.122481 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0603 12:47:38.122926 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:38.123711 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:38.123740 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:38.124087 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:38.124289 1101149 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:47:38.125947 1101149 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:47:38.125966 1101149 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:38.126273 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:38.126313 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:38.141293 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I0603 12:47:38.141713 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:38.142151 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:38.142172 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:38.142462 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:38.142650 1101149 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:47:38.145377 1101149 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:38.145833 1101149 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:38.145868 1101149 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:38.146009 1101149 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:38.146307 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:38.146351 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:38.161621 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43135
	I0603 12:47:38.162053 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:38.162549 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:38.162574 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:38.162886 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:38.163088 1101149 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:47:38.163311 1101149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:38.163345 1101149 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:47:38.166124 1101149 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:38.166528 1101149 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:38.166562 1101149 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:38.166717 1101149 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:47:38.166864 1101149 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:47:38.167021 1101149 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:47:38.167188 1101149 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:47:38.253136 1101149 ssh_runner.go:195] Run: systemctl --version
	I0603 12:47:38.259198 1101149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:38.274334 1101149 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:38.274382 1101149 api_server.go:166] Checking apiserver status ...
	I0603 12:47:38.274426 1101149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:38.289102 1101149 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W0603 12:47:38.298835 1101149 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:38.298886 1101149 ssh_runner.go:195] Run: ls
	I0603 12:47:38.303116 1101149 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:38.309461 1101149 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:38.309484 1101149 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:47:38.309495 1101149 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:38.309512 1101149 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:47:38.309812 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:38.309852 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:38.324939 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0603 12:47:38.325343 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:38.325908 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:38.325935 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:38.326236 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:38.326425 1101149 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:47:38.328307 1101149 status.go:330] ha-220492-m02 host status = "Running" (err=<nil>)
	I0603 12:47:38.328325 1101149 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:38.328642 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:38.328684 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:38.343884 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35233
	I0603 12:47:38.344365 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:38.344820 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:38.344848 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:38.345134 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:38.345367 1101149 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:47:38.348136 1101149 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:38.348569 1101149 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:38.348598 1101149 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:38.348907 1101149 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:38.349221 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:38.349273 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:38.363877 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
	I0603 12:47:38.364246 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:38.364771 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:38.364789 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:38.365100 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:38.365349 1101149 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:47:38.365559 1101149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:38.365580 1101149 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:47:38.368304 1101149 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:38.368758 1101149 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:38.368779 1101149 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:38.368901 1101149 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:47:38.369076 1101149 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:47:38.369216 1101149 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:47:38.369365 1101149 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	W0603 12:47:40.909686 1101149 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:47:40.909795 1101149 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0603 12:47:40.909814 1101149 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:40.909827 1101149 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 12:47:40.909851 1101149 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:40.909861 1101149 status.go:255] checking status of ha-220492-m03 ...
	I0603 12:47:40.910300 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:40.910356 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:40.926850 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34033
	I0603 12:47:40.927319 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:40.927855 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:40.927885 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:40.928307 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:40.928608 1101149 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:47:40.930628 1101149 status.go:330] ha-220492-m03 host status = "Running" (err=<nil>)
	I0603 12:47:40.930649 1101149 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:40.930985 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:40.931029 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:40.946377 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44951
	I0603 12:47:40.946885 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:40.947477 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:40.947503 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:40.947809 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:40.948005 1101149 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:47:40.950664 1101149 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:40.951109 1101149 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:40.951137 1101149 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:40.951286 1101149 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:40.951594 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:40.951631 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:40.967365 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0603 12:47:40.967780 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:40.968196 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:40.968218 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:40.968531 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:40.968736 1101149 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:47:40.968956 1101149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:40.968996 1101149 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:47:40.971852 1101149 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:40.972272 1101149 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:40.972300 1101149 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:40.972484 1101149 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:47:40.972689 1101149 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:47:40.972834 1101149 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:47:40.973019 1101149 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:47:41.057022 1101149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:41.072445 1101149 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:41.072474 1101149 api_server.go:166] Checking apiserver status ...
	I0603 12:47:41.072505 1101149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:41.086483 1101149 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0603 12:47:41.098924 1101149 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:41.098986 1101149 ssh_runner.go:195] Run: ls
	I0603 12:47:41.103543 1101149 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:41.110249 1101149 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:41.110271 1101149 status.go:422] ha-220492-m03 apiserver status = Running (err=<nil>)
	I0603 12:47:41.110281 1101149 status.go:257] ha-220492-m03 status: &{Name:ha-220492-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:41.110296 1101149 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:47:41.110596 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:41.110631 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:41.126722 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34821
	I0603 12:47:41.127182 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:41.127785 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:41.127814 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:41.128184 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:41.128406 1101149 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:47:41.130004 1101149 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:47:41.130020 1101149 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:41.130367 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:41.130403 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:41.146939 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42079
	I0603 12:47:41.147304 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:41.147760 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:41.147779 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:41.148069 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:41.148257 1101149 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:47:41.151088 1101149 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:41.151586 1101149 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:41.151619 1101149 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:41.151775 1101149 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:41.152121 1101149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:41.152167 1101149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:41.167098 1101149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0603 12:47:41.167467 1101149 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:41.167921 1101149 main.go:141] libmachine: Using API Version  1
	I0603 12:47:41.167944 1101149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:41.168221 1101149 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:41.168404 1101149 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:47:41.168571 1101149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:41.168591 1101149 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:47:41.171419 1101149 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:41.171852 1101149 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:41.171871 1101149 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:41.172020 1101149 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:47:41.172200 1101149 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:47:41.172355 1101149 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:47:41.172496 1101149 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:47:41.257615 1101149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:41.272120 1101149 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
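The stderr block above shows how each status pass decides that an apiserver is "Running": after the freezer-cgroup lookup fails (expected on cgroup v2 guests), it falls back to probing `https://192.168.39.254:8443/healthz` and accepts a `200`/`ok` response. The following is a minimal sketch of that kind of probe, not minikube's actual implementation; the `InsecureSkipVerify` transport is an assumption made for brevity, whereas a real check would trust the cluster CA from the kubeconfig.

```go
// Sketch: probe an apiserver healthz endpoint and treat HTTP 200 with body
// "ok" as a running apiserver, mirroring the "returned 200: ok" lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only: skip CA verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443")
	fmt.Println(ok, err)
}
```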
E0603 12:47:42.073224 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
E0603 12:47:45.541535 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 3 (4.888542986s)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220492-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:47:42.576971 1101250 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:47:42.577233 1101250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:42.577244 1101250 out.go:304] Setting ErrFile to fd 2...
	I0603 12:47:42.577265 1101250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:42.577575 1101250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:47:42.577798 1101250 out.go:298] Setting JSON to false
	I0603 12:47:42.577825 1101250 mustload.go:65] Loading cluster: ha-220492
	I0603 12:47:42.577995 1101250 notify.go:220] Checking for updates...
	I0603 12:47:42.578393 1101250 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:47:42.578414 1101250 status.go:255] checking status of ha-220492 ...
	I0603 12:47:42.578835 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:42.578914 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:42.599630 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0603 12:47:42.600095 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:42.600732 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:42.600765 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:42.601428 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:42.601680 1101250 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:47:42.603414 1101250 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:47:42.603437 1101250 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:42.603725 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:42.603777 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:42.618688 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43993
	I0603 12:47:42.619097 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:42.619574 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:42.619596 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:42.619933 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:42.620158 1101250 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:47:42.622978 1101250 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:42.623458 1101250 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:42.623481 1101250 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:42.623602 1101250 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:42.623875 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:42.623913 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:42.638679 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45849
	I0603 12:47:42.639111 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:42.639622 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:42.639647 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:42.640020 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:42.640224 1101250 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:47:42.640506 1101250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:42.640535 1101250 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:47:42.643411 1101250 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:42.643851 1101250 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:42.643873 1101250 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:42.644035 1101250 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:47:42.644228 1101250 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:47:42.644359 1101250 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:47:42.644518 1101250 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:47:42.729634 1101250 ssh_runner.go:195] Run: systemctl --version
	I0603 12:47:42.736304 1101250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:42.751856 1101250 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:42.751907 1101250 api_server.go:166] Checking apiserver status ...
	I0603 12:47:42.751956 1101250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:42.768615 1101250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W0603 12:47:42.779694 1101250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:42.779752 1101250 ssh_runner.go:195] Run: ls
	I0603 12:47:42.784288 1101250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:42.788496 1101250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:42.788518 1101250 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:47:42.788529 1101250 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:42.788545 1101250 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:47:42.788824 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:42.788863 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:42.804982 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I0603 12:47:42.805418 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:42.805958 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:42.805986 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:42.806362 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:42.806603 1101250 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:47:42.808103 1101250 status.go:330] ha-220492-m02 host status = "Running" (err=<nil>)
	I0603 12:47:42.808124 1101250 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:42.808402 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:42.808442 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:42.823616 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0603 12:47:42.824017 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:42.824506 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:42.824529 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:42.824834 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:42.825039 1101250 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:47:42.827917 1101250 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:42.828452 1101250 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:42.828484 1101250 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:42.828563 1101250 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:42.828919 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:42.828960 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:42.843712 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0603 12:47:42.844285 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:42.844904 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:42.844925 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:42.845266 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:42.845481 1101250 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:47:42.845693 1101250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:42.845715 1101250 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:47:42.848602 1101250 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:42.849026 1101250 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:42.849058 1101250 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:42.849199 1101250 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:47:42.849392 1101250 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:47:42.849579 1101250 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:47:42.849730 1101250 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	W0603 12:47:43.977700 1101250 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:43.977776 1101250 retry.go:31] will retry after 185.680298ms: dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:47:47.049758 1101250 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:47:47.049875 1101250 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0603 12:47:47.049901 1101250 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:47.049915 1101250 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 12:47:47.049961 1101250 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:47.049972 1101250 status.go:255] checking status of ha-220492-m03 ...
	I0603 12:47:47.050335 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:47.050392 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:47.066049 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0603 12:47:47.066612 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:47.067130 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:47.067150 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:47.067505 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:47.067755 1101250 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:47:47.069192 1101250 status.go:330] ha-220492-m03 host status = "Running" (err=<nil>)
	I0603 12:47:47.069212 1101250 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:47.069543 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:47.069580 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:47.085884 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0603 12:47:47.086312 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:47.086830 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:47.086866 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:47.087199 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:47.087418 1101250 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:47:47.090167 1101250 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:47.090610 1101250 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:47.090631 1101250 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:47.090763 1101250 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:47.091164 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:47.091208 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:47.105881 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I0603 12:47:47.106255 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:47.106724 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:47.106754 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:47.107075 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:47.107314 1101250 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:47:47.107543 1101250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:47.107564 1101250 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:47:47.110589 1101250 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:47.111007 1101250 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:47.111035 1101250 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:47.111134 1101250 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:47:47.111346 1101250 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:47:47.111521 1101250 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:47:47.111661 1101250 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:47:47.193175 1101250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:47.215173 1101250 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:47.215211 1101250 api_server.go:166] Checking apiserver status ...
	I0603 12:47:47.215253 1101250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:47.231005 1101250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0603 12:47:47.242738 1101250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:47.242802 1101250 ssh_runner.go:195] Run: ls
	I0603 12:47:47.247694 1101250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:47.252026 1101250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:47.252054 1101250 status.go:422] ha-220492-m03 apiserver status = Running (err=<nil>)
	I0603 12:47:47.252068 1101250 status.go:257] ha-220492-m03 status: &{Name:ha-220492-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:47.252089 1101250 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:47:47.252394 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:47.252428 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:47.267611 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0603 12:47:47.268096 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:47.268629 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:47.268668 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:47.269020 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:47.269239 1101250 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:47:47.270881 1101250 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:47:47.270897 1101250 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:47.271185 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:47.271221 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:47.286295 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0603 12:47:47.286731 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:47.287226 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:47.287255 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:47.287687 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:47.288062 1101250 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:47:47.290836 1101250 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:47.291320 1101250 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:47.291349 1101250 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:47.291522 1101250 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:47.291935 1101250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:47.291982 1101250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:47.307597 1101250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I0603 12:47:47.307972 1101250 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:47.308524 1101250 main.go:141] libmachine: Using API Version  1
	I0603 12:47:47.308554 1101250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:47.308846 1101250 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:47.309039 1101250 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:47:47.309212 1101250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:47.309233 1101250 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:47:47.311782 1101250 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:47.312206 1101250 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:47.312228 1101250 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:47.312424 1101250 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:47:47.312589 1101250 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:47:47.312712 1101250 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:47:47.312848 1101250 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:47:47.397781 1101250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:47.414744 1101250 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
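In this run the interesting part is ha-220492-m02: the SSH dial to 192.168.39.106:22 fails with "no route to host", is retried briefly, and the node is then reported as `Host:Error` / `Kubelet:Nonexistent`. Below is a minimal sketch, assuming a fixed wait rather than minikube's growing backoff, of how a bounded port-22 reachability probe produces that outcome; the address and rough timing come from the log, the retry policy is an assumption.

```go
// Sketch: a TCP reachability probe for a node's SSH port with a small retry
// budget. Exhausting the budget corresponds to the Host:Error status above.
package main

import (
	"fmt"
	"net"
	"time"
)

func sshReachable(addr string, attempts int, wait time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 answered; the host is reachable
		}
		lastErr = err
		time.Sleep(wait) // hypothetical fixed wait; real code backs off
	}
	return fmt.Errorf("ssh port unreachable after %d attempts: %w", attempts, lastErr)
}

func main() {
	if err := sshReachable("192.168.39.106:22", 3, 200*time.Millisecond); err != nil {
		// Maps to Host:Error / Kubelet:Nonexistent for ha-220492-m02.
		fmt.Println(err)
	}
}
```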
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 3 (5.088468847s)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220492-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:47:48.509831 1101358 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:47:48.510099 1101358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:48.510109 1101358 out.go:304] Setting ErrFile to fd 2...
	I0603 12:47:48.510114 1101358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:48.510344 1101358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:47:48.510556 1101358 out.go:298] Setting JSON to false
	I0603 12:47:48.510585 1101358 mustload.go:65] Loading cluster: ha-220492
	I0603 12:47:48.510635 1101358 notify.go:220] Checking for updates...
	I0603 12:47:48.511025 1101358 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:47:48.511042 1101358 status.go:255] checking status of ha-220492 ...
	I0603 12:47:48.511422 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:48.511483 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:48.528306 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0603 12:47:48.528712 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:48.529257 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:48.529287 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:48.529736 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:48.529958 1101358 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:47:48.531651 1101358 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:47:48.531666 1101358 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:48.531990 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:48.532032 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:48.548464 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0603 12:47:48.548867 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:48.549331 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:48.549356 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:48.549701 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:48.549887 1101358 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:47:48.552761 1101358 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:48.553172 1101358 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:48.553201 1101358 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:48.553347 1101358 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:48.553691 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:48.553729 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:48.569090 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I0603 12:47:48.569486 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:48.570216 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:48.570236 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:48.570560 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:48.570797 1101358 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:47:48.571007 1101358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:48.571029 1101358 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:47:48.574024 1101358 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:48.574512 1101358 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:48.574540 1101358 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:48.574689 1101358 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:47:48.574898 1101358 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:47:48.575078 1101358 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:47:48.575237 1101358 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:47:48.662601 1101358 ssh_runner.go:195] Run: systemctl --version
	I0603 12:47:48.668994 1101358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:48.686415 1101358 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:48.686451 1101358 api_server.go:166] Checking apiserver status ...
	I0603 12:47:48.686487 1101358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:48.700279 1101358 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W0603 12:47:48.710581 1101358 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:48.710641 1101358 ssh_runner.go:195] Run: ls
	I0603 12:47:48.715066 1101358 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:48.719088 1101358 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:48.719115 1101358 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:47:48.719128 1101358 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:48.719161 1101358 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:47:48.719569 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:48.719612 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:48.735671 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42547
	I0603 12:47:48.736268 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:48.736849 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:48.736870 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:48.737277 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:48.737547 1101358 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:47:48.739062 1101358 status.go:330] ha-220492-m02 host status = "Running" (err=<nil>)
	I0603 12:47:48.739078 1101358 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:48.739377 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:48.739411 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:48.755576 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I0603 12:47:48.755953 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:48.756433 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:48.756458 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:48.756792 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:48.756974 1101358 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:47:48.759897 1101358 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:48.760354 1101358 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:48.760376 1101358 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:48.760555 1101358 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:48.760903 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:48.760955 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:48.776222 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0603 12:47:48.776680 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:48.777184 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:48.777223 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:48.777623 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:48.777825 1101358 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:47:48.778058 1101358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:48.778080 1101358 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:47:48.780861 1101358 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:48.781329 1101358 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:48.781357 1101358 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:48.781504 1101358 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:47:48.781704 1101358 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:47:48.781876 1101358 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:47:48.782044 1101358 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	W0603 12:47:50.121776 1101358 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:50.121844 1101358 retry.go:31] will retry after 242.255442ms: dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:47:53.193794 1101358 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:47:53.193914 1101358 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0603 12:47:53.193934 1101358 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:53.193941 1101358 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 12:47:53.193962 1101358 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:53.193970 1101358 status.go:255] checking status of ha-220492-m03 ...
	I0603 12:47:53.194288 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:53.194333 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:53.210983 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32899
	I0603 12:47:53.211506 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:53.212116 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:53.212143 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:53.212549 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:53.212746 1101358 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:47:53.214439 1101358 status.go:330] ha-220492-m03 host status = "Running" (err=<nil>)
	I0603 12:47:53.214455 1101358 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:53.214744 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:53.214780 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:53.231339 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43407
	I0603 12:47:53.231761 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:53.232289 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:53.232314 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:53.232645 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:53.232872 1101358 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:47:53.235693 1101358 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:53.236113 1101358 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:53.236142 1101358 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:53.236330 1101358 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:53.236649 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:53.236695 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:53.251519 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0603 12:47:53.251976 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:53.252486 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:53.252509 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:53.252808 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:53.253014 1101358 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:47:53.253210 1101358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:53.253231 1101358 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:47:53.256135 1101358 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:53.256545 1101358 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:53.256570 1101358 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:53.256710 1101358 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:47:53.256875 1101358 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:47:53.257064 1101358 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:47:53.257226 1101358 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:47:53.342306 1101358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:53.358387 1101358 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:53.358427 1101358 api_server.go:166] Checking apiserver status ...
	I0603 12:47:53.358475 1101358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:53.372576 1101358 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0603 12:47:53.383786 1101358 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:53.383838 1101358 ssh_runner.go:195] Run: ls
	I0603 12:47:53.388216 1101358 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:53.392430 1101358 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:53.392460 1101358 status.go:422] ha-220492-m03 apiserver status = Running (err=<nil>)
	I0603 12:47:53.392472 1101358 status.go:257] ha-220492-m03 status: &{Name:ha-220492-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:53.392487 1101358 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:47:53.392814 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:53.392851 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:53.408063 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0603 12:47:53.408468 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:53.408948 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:53.408970 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:53.409328 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:53.409553 1101358 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:47:53.411344 1101358 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:47:53.411364 1101358 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:53.411716 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:53.411758 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:53.427251 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I0603 12:47:53.427724 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:53.428182 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:53.428204 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:53.428616 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:53.428816 1101358 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:47:53.431385 1101358 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:53.431866 1101358 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:53.431894 1101358 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:53.432062 1101358 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:53.432363 1101358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:53.432396 1101358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:53.448050 1101358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I0603 12:47:53.448428 1101358 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:53.448932 1101358 main.go:141] libmachine: Using API Version  1
	I0603 12:47:53.448954 1101358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:53.449267 1101358 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:53.449491 1101358 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:47:53.449677 1101358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:53.449700 1101358 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:47:53.452415 1101358 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:53.452822 1101358 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:53.452859 1101358 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:53.452979 1101358 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:47:53.453163 1101358 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:47:53.453341 1101358 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:47:53.453553 1101358 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:47:53.537289 1101358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:53.552066 1101358 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
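Each reachable node in the passes above is probed with two shell commands over SSH: a `df` pipeline that extracts the `/var` usage percentage, and `systemctl is-active --quiet ... kubelet`, whose exit code doubles as the kubelet health signal. The sketch below runs equivalent commands locally via `os/exec` purely for illustration; minikube executes them through the node's SSH session, and the exact command strings are taken from the ssh_runner lines in the log.

```go
// Sketch: the two per-node shell probes, run locally instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Usage percentage of /var, e.g. "17%".
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		fmt.Println("df probe failed:", err)
	} else {
		fmt.Println("/var used:", strings.TrimSpace(string(out)))
	}

	// Exit code 0 means the kubelet unit is active; a non-zero exit is what
	// the status command reports as a stopped or missing kubelet.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet not active:", err)
	} else {
		fmt.Println("kubelet active")
	}
}
```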
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 3 (4.443165783s)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220492-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:47:55.620133 1101458 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:47:55.620267 1101458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:55.620279 1101458 out.go:304] Setting ErrFile to fd 2...
	I0603 12:47:55.620285 1101458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:47:55.620577 1101458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:47:55.620767 1101458 out.go:298] Setting JSON to false
	I0603 12:47:55.620794 1101458 mustload.go:65] Loading cluster: ha-220492
	I0603 12:47:55.620925 1101458 notify.go:220] Checking for updates...
	I0603 12:47:55.621521 1101458 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:47:55.621550 1101458 status.go:255] checking status of ha-220492 ...
	I0603 12:47:55.622799 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:55.622858 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:55.640435 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0603 12:47:55.640957 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:55.641576 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:55.641596 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:55.642029 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:55.642215 1101458 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:47:55.643694 1101458 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:47:55.643714 1101458 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:55.644103 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:55.644161 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:55.659423 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0603 12:47:55.659821 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:55.660265 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:55.660298 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:55.660601 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:55.660800 1101458 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:47:55.663612 1101458 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:55.664045 1101458 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:55.664076 1101458 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:55.664202 1101458 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:47:55.664514 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:55.664554 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:55.680061 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0603 12:47:55.680512 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:55.680918 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:55.680937 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:55.681256 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:55.681488 1101458 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:47:55.681670 1101458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:55.681693 1101458 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:47:55.684562 1101458 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:55.685017 1101458 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:47:55.685043 1101458 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:47:55.685189 1101458 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:47:55.685386 1101458 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:47:55.685570 1101458 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:47:55.685750 1101458 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:47:55.769625 1101458 ssh_runner.go:195] Run: systemctl --version
	I0603 12:47:55.776751 1101458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:55.791072 1101458 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:55.791122 1101458 api_server.go:166] Checking apiserver status ...
	I0603 12:47:55.791167 1101458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:55.811004 1101458 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W0603 12:47:55.820656 1101458 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:55.820714 1101458 ssh_runner.go:195] Run: ls
	I0603 12:47:55.825238 1101458 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:55.833996 1101458 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:55.834032 1101458 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:47:55.834046 1101458 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:55.834079 1101458 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:47:55.834523 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:55.834572 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:55.852994 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0603 12:47:55.853415 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:55.853899 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:55.853922 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:55.854216 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:55.854428 1101458 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:47:55.855919 1101458 status.go:330] ha-220492-m02 host status = "Running" (err=<nil>)
	I0603 12:47:55.855938 1101458 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:55.856244 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:55.856282 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:55.872056 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I0603 12:47:55.872512 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:55.873058 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:55.873082 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:55.873438 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:55.873667 1101458 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:47:55.876867 1101458 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:55.877326 1101458 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:55.877355 1101458 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:55.877444 1101458 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:47:55.877777 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:55.877816 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:55.893134 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40107
	I0603 12:47:55.893672 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:55.894198 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:55.894227 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:55.894549 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:55.894739 1101458 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:47:55.894950 1101458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:55.894977 1101458 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:47:55.897663 1101458 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:55.898007 1101458 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:47:55.898035 1101458 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:47:55.898204 1101458 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:47:55.898369 1101458 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:47:55.898524 1101458 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:47:55.898654 1101458 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	W0603 12:47:56.265715 1101458 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:56.265797 1101458 retry.go:31] will retry after 327.782561ms: dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:47:59.661652 1101458 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:47:59.661770 1101458 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0603 12:47:59.661798 1101458 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:59.661808 1101458 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 12:47:59.661858 1101458 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:47:59.661868 1101458 status.go:255] checking status of ha-220492-m03 ...
	I0603 12:47:59.662302 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:59.662360 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:59.678065 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0603 12:47:59.678534 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:59.679100 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:59.679126 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:59.679453 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:59.679670 1101458 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:47:59.681066 1101458 status.go:330] ha-220492-m03 host status = "Running" (err=<nil>)
	I0603 12:47:59.681082 1101458 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:59.681380 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:59.681434 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:59.696227 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
	I0603 12:47:59.696606 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:59.697025 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:59.697047 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:59.697381 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:59.697615 1101458 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:47:59.700300 1101458 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:59.700758 1101458 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:59.700780 1101458 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:59.700922 1101458 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:47:59.701243 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:59.701297 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:59.715967 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0603 12:47:59.716419 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:59.716991 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:59.717019 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:59.717362 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:59.717579 1101458 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:47:59.717790 1101458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:59.717815 1101458 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:47:59.720837 1101458 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:59.721278 1101458 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:47:59.721324 1101458 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:47:59.721523 1101458 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:47:59.721688 1101458 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:47:59.721871 1101458 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:47:59.722009 1101458 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:47:59.805253 1101458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:47:59.819846 1101458 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:47:59.819898 1101458 api_server.go:166] Checking apiserver status ...
	I0603 12:47:59.819944 1101458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:47:59.834291 1101458 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0603 12:47:59.844251 1101458 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:47:59.844328 1101458 ssh_runner.go:195] Run: ls
	I0603 12:47:59.850445 1101458 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:47:59.856493 1101458 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:47:59.856530 1101458 status.go:422] ha-220492-m03 apiserver status = Running (err=<nil>)
	I0603 12:47:59.856543 1101458 status.go:257] ha-220492-m03 status: &{Name:ha-220492-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:47:59.856565 1101458 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:47:59.856868 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:59.856904 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:59.872338 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46677
	I0603 12:47:59.872736 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:59.873225 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:59.873250 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:59.873617 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:59.873891 1101458 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:47:59.875589 1101458 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:47:59.875609 1101458 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:59.875915 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:59.875959 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:59.890995 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0603 12:47:59.891366 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:59.891832 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:59.891854 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:59.892169 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:59.892371 1101458 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:47:59.895101 1101458 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:59.895499 1101458 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:59.895534 1101458 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:59.895650 1101458 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:47:59.895945 1101458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:47:59.895995 1101458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:47:59.911950 1101458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46807
	I0603 12:47:59.912414 1101458 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:47:59.912935 1101458 main.go:141] libmachine: Using API Version  1
	I0603 12:47:59.912965 1101458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:47:59.913262 1101458 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:47:59.913458 1101458 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:47:59.913665 1101458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:47:59.913686 1101458 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:47:59.917010 1101458 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:59.917584 1101458 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:47:59.917622 1101458 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:47:59.917771 1101458 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:47:59.917992 1101458 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:47:59.918179 1101458 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:47:59.918353 1101458 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:48:00.002096 1101458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:00.017127 1101458 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
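The m02 error in the stderr above comes from the SSH dial step: dial tcp 192.168.39.106:22: connect: no route to host, retried once and then surfaced as Host:Error / Kubelet:Nonexistent for that node. A minimal sketch of that dial-and-retry pattern follows (illustrative only, not minikube's retry.go; the address and rough timings are taken from the log):

// Sketch of a dial-with-backoff loop: try TCP port 22, back off, and give up
// after a deadline, which is roughly when the status code reports Host:Error.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, deadline time.Duration) error {
	backoff := 300 * time.Millisecond
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("giving up on %s: %w", addr, err)
		}
		fmt.Printf("dial failure (will retry after %s): %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	// 192.168.39.106 is the stopped ha-220492-m02 address from the log.
	if err := dialWithRetry("192.168.39.106:22", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}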
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 3 (4.200164015s)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220492-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:48:02.370302 1101575 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:48:02.370679 1101575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:02.370695 1101575 out.go:304] Setting ErrFile to fd 2...
	I0603 12:48:02.371034 1101575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:02.371677 1101575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:48:02.372042 1101575 out.go:298] Setting JSON to false
	I0603 12:48:02.372071 1101575 mustload.go:65] Loading cluster: ha-220492
	I0603 12:48:02.372132 1101575 notify.go:220] Checking for updates...
	I0603 12:48:02.372623 1101575 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:48:02.372649 1101575 status.go:255] checking status of ha-220492 ...
	I0603 12:48:02.373088 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:02.373153 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:02.389017 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I0603 12:48:02.389516 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:02.390151 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:02.390184 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:02.390564 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:02.390738 1101575 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:48:02.392121 1101575 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:48:02.392141 1101575 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:48:02.392565 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:02.392607 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:02.408198 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0603 12:48:02.408719 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:02.409198 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:02.409227 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:02.409579 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:02.409767 1101575 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:48:02.412306 1101575 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:02.412683 1101575 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:48:02.412713 1101575 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:02.412853 1101575 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:48:02.413141 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:02.413174 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:02.428927 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I0603 12:48:02.429313 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:02.429799 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:02.429820 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:02.430153 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:02.430412 1101575 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:48:02.430640 1101575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:02.430668 1101575 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:48:02.433480 1101575 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:02.433915 1101575 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:48:02.433935 1101575 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:02.434129 1101575 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:48:02.434335 1101575 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:48:02.434548 1101575 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:48:02.434755 1101575 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:48:02.518122 1101575 ssh_runner.go:195] Run: systemctl --version
	I0603 12:48:02.525174 1101575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:02.543717 1101575 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:48:02.543755 1101575 api_server.go:166] Checking apiserver status ...
	I0603 12:48:02.543793 1101575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:48:02.566998 1101575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W0603 12:48:02.580546 1101575 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:48:02.580600 1101575 ssh_runner.go:195] Run: ls
	I0603 12:48:02.585471 1101575 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:48:02.591492 1101575 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:48:02.591519 1101575 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:48:02.591528 1101575 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:02.591545 1101575 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:48:02.591843 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:02.591887 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:02.607636 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
	I0603 12:48:02.608035 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:02.608536 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:02.608558 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:02.608898 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:02.609087 1101575 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:48:02.610698 1101575 status.go:330] ha-220492-m02 host status = "Running" (err=<nil>)
	I0603 12:48:02.610719 1101575 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:48:02.611043 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:02.611080 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:02.626972 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46277
	I0603 12:48:02.627394 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:02.627838 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:02.627868 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:02.628173 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:02.628356 1101575 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:48:02.631130 1101575 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:48:02.631610 1101575 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:48:02.631650 1101575 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:48:02.631775 1101575 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:48:02.632197 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:02.632282 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:02.649015 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37195
	I0603 12:48:02.649428 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:02.649961 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:02.649983 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:02.650372 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:02.650576 1101575 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:48:02.650787 1101575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:02.650811 1101575 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:48:02.654212 1101575 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:48:02.654750 1101575 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:48:02.654791 1101575 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:48:02.654968 1101575 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:48:02.655172 1101575 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:48:02.655353 1101575 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:48:02.655516 1101575 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	W0603 12:48:02.729676 1101575 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:48:02.729731 1101575 retry.go:31] will retry after 368.975567ms: dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:48:06.153690 1101575 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:48:06.153810 1101575 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0603 12:48:06.153834 1101575 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:48:06.153842 1101575 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 12:48:06.153873 1101575 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:48:06.153886 1101575 status.go:255] checking status of ha-220492-m03 ...
	I0603 12:48:06.154299 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:06.154363 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:06.170567 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0603 12:48:06.171023 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:06.171551 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:06.171572 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:06.171902 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:06.172099 1101575 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:48:06.173750 1101575 status.go:330] ha-220492-m03 host status = "Running" (err=<nil>)
	I0603 12:48:06.173767 1101575 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:48:06.174078 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:06.174116 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:06.189816 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0603 12:48:06.190256 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:06.190785 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:06.190835 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:06.191194 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:06.191396 1101575 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:48:06.194257 1101575 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:06.194678 1101575 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:48:06.194718 1101575 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:06.194830 1101575 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:48:06.195140 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:06.195189 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:06.209794 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0603 12:48:06.210280 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:06.210749 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:06.210775 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:06.211046 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:06.211267 1101575 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:48:06.211475 1101575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:06.211495 1101575 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:48:06.213800 1101575 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:06.214209 1101575 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:48:06.214237 1101575 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:06.214341 1101575 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:48:06.214495 1101575 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:48:06.214637 1101575 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:48:06.214787 1101575 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:48:06.303357 1101575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:06.318987 1101575 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:48:06.319020 1101575 api_server.go:166] Checking apiserver status ...
	I0603 12:48:06.319056 1101575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:48:06.333220 1101575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0603 12:48:06.348594 1101575 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:48:06.348651 1101575 ssh_runner.go:195] Run: ls
	I0603 12:48:06.355115 1101575 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:48:06.359737 1101575 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:48:06.359759 1101575 status.go:422] ha-220492-m03 apiserver status = Running (err=<nil>)
	I0603 12:48:06.359767 1101575 status.go:257] ha-220492-m03 status: &{Name:ha-220492-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:06.359783 1101575 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:48:06.360062 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:06.360154 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:06.378389 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35937
	I0603 12:48:06.378801 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:06.379342 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:06.379362 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:06.379700 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:06.379918 1101575 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:48:06.381348 1101575 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:48:06.381369 1101575 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:48:06.381703 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:06.381741 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:06.398025 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0603 12:48:06.398467 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:06.398971 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:06.398992 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:06.399309 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:06.399590 1101575 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:48:06.402288 1101575 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:06.402700 1101575 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:48:06.402733 1101575 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:06.402922 1101575 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:48:06.403330 1101575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:06.403373 1101575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:06.417827 1101575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36997
	I0603 12:48:06.418225 1101575 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:06.418736 1101575 main.go:141] libmachine: Using API Version  1
	I0603 12:48:06.418757 1101575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:06.419091 1101575 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:06.419313 1101575 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:48:06.419537 1101575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:06.419566 1101575 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:48:06.422387 1101575 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:06.422815 1101575 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:48:06.422844 1101575 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:06.423014 1101575 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:48:06.423191 1101575 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:48:06.423373 1101575 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:48:06.423510 1101575 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:48:06.508896 1101575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:06.522749 1101575 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
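For the control-plane nodes that are reachable, the probe above decides apiserver: Running by requesting https://192.168.39.254:8443/healthz on the load-balancer VIP and expecting HTTP 200 ("ok" in the log). A minimal sketch of that health check, assuming a reachable endpoint; minikube authenticates against the cluster CA from the kubeconfig, whereas this sketch skips TLS verification purely for brevity:

// Sketch of the /healthz probe: HTTP 200 maps to "apiserver status = Running".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; real checks should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// VIP taken from the log; adjust for your own cluster.
	fmt.Println("healthy:", apiserverHealthy("https://192.168.39.254:8443/healthz"))
}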
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 3 (3.728124624s)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-220492-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:48:13.517022 1101691 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:48:13.517302 1101691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:13.517312 1101691 out.go:304] Setting ErrFile to fd 2...
	I0603 12:48:13.517318 1101691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:13.517522 1101691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:48:13.517739 1101691 out.go:298] Setting JSON to false
	I0603 12:48:13.517773 1101691 mustload.go:65] Loading cluster: ha-220492
	I0603 12:48:13.517891 1101691 notify.go:220] Checking for updates...
	I0603 12:48:13.518346 1101691 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:48:13.518369 1101691 status.go:255] checking status of ha-220492 ...
	I0603 12:48:13.518975 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:13.519057 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:13.535917 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
	I0603 12:48:13.536428 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:13.537130 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:13.537162 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:13.537591 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:13.537786 1101691 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:48:13.539480 1101691 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:48:13.539499 1101691 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:48:13.539836 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:13.539876 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:13.555875 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I0603 12:48:13.556343 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:13.556821 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:13.556838 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:13.557150 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:13.557346 1101691 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:48:13.560399 1101691 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:13.560856 1101691 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:48:13.560890 1101691 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:13.561047 1101691 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:48:13.561474 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:13.561517 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:13.576672 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33241
	I0603 12:48:13.577097 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:13.577610 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:13.577635 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:13.577940 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:13.578141 1101691 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:48:13.578394 1101691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:13.578419 1101691 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:48:13.581289 1101691 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:13.581825 1101691 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:48:13.581870 1101691 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:13.582009 1101691 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:48:13.582200 1101691 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:48:13.582373 1101691 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:48:13.582527 1101691 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:48:13.669028 1101691 ssh_runner.go:195] Run: systemctl --version
	I0603 12:48:13.677293 1101691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:13.693643 1101691 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:48:13.693684 1101691 api_server.go:166] Checking apiserver status ...
	I0603 12:48:13.693719 1101691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:48:13.708914 1101691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W0603 12:48:13.718294 1101691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:48:13.718359 1101691 ssh_runner.go:195] Run: ls
	I0603 12:48:13.722984 1101691 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:48:13.729007 1101691 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:48:13.729032 1101691 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:48:13.729046 1101691 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:13.729068 1101691 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:48:13.729363 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:13.729430 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:13.745980 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32899
	I0603 12:48:13.746494 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:13.747076 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:13.747105 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:13.747442 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:13.747614 1101691 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:48:13.749185 1101691 status.go:330] ha-220492-m02 host status = "Running" (err=<nil>)
	I0603 12:48:13.749202 1101691 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:48:13.749581 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:13.749627 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:13.766179 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0603 12:48:13.766587 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:13.767127 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:13.767154 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:13.767554 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:13.767746 1101691 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:48:13.770494 1101691 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:48:13.770995 1101691 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:48:13.771017 1101691 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:48:13.771160 1101691 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:48:13.771479 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:13.771523 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:13.786551 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0603 12:48:13.786974 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:13.787387 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:13.787408 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:13.787719 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:13.787896 1101691 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:48:13.788104 1101691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:13.788124 1101691 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:48:13.791099 1101691 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:48:13.791510 1101691 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:48:13.791538 1101691 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:48:13.791679 1101691 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:48:13.791824 1101691 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:48:13.791945 1101691 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:48:13.792032 1101691 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	W0603 12:48:16.841681 1101691 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.106:22: connect: no route to host
	W0603 12:48:16.841793 1101691 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0603 12:48:16.841811 1101691 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:48:16.841821 1101691 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0603 12:48:16.841839 1101691 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	I0603 12:48:16.841846 1101691 status.go:255] checking status of ha-220492-m03 ...
	I0603 12:48:16.842170 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:16.842215 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:16.857373 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44699
	I0603 12:48:16.857941 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:16.858510 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:16.858536 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:16.858895 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:16.859084 1101691 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:48:16.860865 1101691 status.go:330] ha-220492-m03 host status = "Running" (err=<nil>)
	I0603 12:48:16.860886 1101691 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:48:16.861185 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:16.861219 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:16.876408 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0603 12:48:16.876865 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:16.877350 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:16.877372 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:16.877684 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:16.877871 1101691 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:48:16.880596 1101691 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:16.881064 1101691 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:48:16.881098 1101691 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:16.881236 1101691 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:48:16.881629 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:16.881681 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:16.896449 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43061
	I0603 12:48:16.896880 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:16.897309 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:16.897329 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:16.897660 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:16.897850 1101691 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:48:16.898037 1101691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:16.898058 1101691 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:48:16.900685 1101691 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:16.901113 1101691 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:48:16.901139 1101691 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:16.901303 1101691 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:48:16.901485 1101691 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:48:16.901665 1101691 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:48:16.901831 1101691 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:48:16.985445 1101691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:17.001290 1101691 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:48:17.001345 1101691 api_server.go:166] Checking apiserver status ...
	I0603 12:48:17.001389 1101691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:48:17.016054 1101691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0603 12:48:17.026244 1101691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:48:17.026310 1101691 ssh_runner.go:195] Run: ls
	I0603 12:48:17.031313 1101691 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:48:17.036540 1101691 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:48:17.036571 1101691 status.go:422] ha-220492-m03 apiserver status = Running (err=<nil>)
	I0603 12:48:17.036583 1101691 status.go:257] ha-220492-m03 status: &{Name:ha-220492-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:17.036606 1101691 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:48:17.036936 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:17.036996 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:17.052449 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0603 12:48:17.052929 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:17.053465 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:17.053487 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:17.053836 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:17.054056 1101691 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:48:17.055532 1101691 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:48:17.055549 1101691 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:48:17.055823 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:17.055858 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:17.070694 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0603 12:48:17.071103 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:17.071574 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:17.071594 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:17.071920 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:17.072174 1101691 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:48:17.075188 1101691 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:17.075617 1101691 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:48:17.075640 1101691 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:17.075884 1101691 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:48:17.076338 1101691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:17.076379 1101691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:17.091168 1101691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0603 12:48:17.091552 1101691 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:17.092020 1101691 main.go:141] libmachine: Using API Version  1
	I0603 12:48:17.092039 1101691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:17.092397 1101691 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:17.092650 1101691 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:48:17.092887 1101691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:17.092911 1101691 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:48:17.095857 1101691 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:17.096274 1101691 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:48:17.096304 1101691 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:17.096453 1101691 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:48:17.096621 1101691 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:48:17.096791 1101691 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:48:17.096918 1101691 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:48:17.182308 1101691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:17.198213 1101691 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
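The trace above shows the per-node probe sequence behind `minikube status`: open an SSH session to the node, check /var usage with `df -h /var`, confirm the kubelet unit with `systemctl is-active`, and for control-plane nodes query the cluster endpoint at https://192.168.39.254:8443/healthz, treating a 200 response with body "ok" as a healthy apiserver. As a rough illustration only (not minikube's actual api_server.go code), that last healthz probe can be approximated in Go as below; the endpoint URL is copied from the log, and TLS verification is skipped purely to keep the sketch self-contained, whereas minikube uses the cluster's client certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz approximates the apiserver probe seen in the log:
	// GET <server>/healthz and require a 200 response.
	// InsecureSkipVerify is used only so the sketch runs without cluster certs.
	func checkHealthz(server string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(server + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		// Server address taken from the kubeconfig line in the log above.
		if err := checkHealthz("https://192.168.39.254:8443"); err != nil {
			fmt.Println("apiserver not healthy:", err)
			return
		}
		fmt.Println("apiserver healthz: ok")
	}

Note that in the run above this probe succeeded for ha-220492 and ha-220492-m03; the exit status 7 comes from ha-220492-m02, where the SSH dial already failed with "no route to host" before any apiserver check could run.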
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 7 (625.536366ms)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-220492-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:48:21.495351 1101813 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:48:21.495604 1101813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:21.495612 1101813 out.go:304] Setting ErrFile to fd 2...
	I0603 12:48:21.495617 1101813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:21.495818 1101813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:48:21.495973 1101813 out.go:298] Setting JSON to false
	I0603 12:48:21.495997 1101813 mustload.go:65] Loading cluster: ha-220492
	I0603 12:48:21.496044 1101813 notify.go:220] Checking for updates...
	I0603 12:48:21.496332 1101813 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:48:21.496347 1101813 status.go:255] checking status of ha-220492 ...
	I0603 12:48:21.496733 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.496799 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.512839 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35693
	I0603 12:48:21.513278 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.513882 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.513909 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.514215 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.514406 1101813 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:48:21.516209 1101813 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:48:21.516227 1101813 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:48:21.516521 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.516570 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.531254 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0603 12:48:21.531660 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.532118 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.532140 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.532453 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.532650 1101813 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:48:21.535358 1101813 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:21.535790 1101813 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:48:21.535819 1101813 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:21.535979 1101813 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:48:21.536267 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.536301 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.551659 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0603 12:48:21.552011 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.552578 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.552596 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.552934 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.553148 1101813 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:48:21.553365 1101813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:21.553388 1101813 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:48:21.556013 1101813 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:21.556436 1101813 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:48:21.556470 1101813 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:21.556610 1101813 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:48:21.556793 1101813 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:48:21.556986 1101813 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:48:21.557169 1101813 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:48:21.637059 1101813 ssh_runner.go:195] Run: systemctl --version
	I0603 12:48:21.643773 1101813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:21.661194 1101813 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:48:21.661239 1101813 api_server.go:166] Checking apiserver status ...
	I0603 12:48:21.661289 1101813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:48:21.679828 1101813 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W0603 12:48:21.689283 1101813 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:48:21.689346 1101813 ssh_runner.go:195] Run: ls
	I0603 12:48:21.693697 1101813 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:48:21.698762 1101813 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:48:21.698789 1101813 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:48:21.698802 1101813 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:21.698828 1101813 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:48:21.699169 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.699217 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.714627 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41769
	I0603 12:48:21.715084 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.715541 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.715562 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.715891 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.716132 1101813 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:48:21.717845 1101813 status.go:330] ha-220492-m02 host status = "Stopped" (err=<nil>)
	I0603 12:48:21.717861 1101813 status.go:343] host is not running, skipping remaining checks
	I0603 12:48:21.717870 1101813 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:21.717890 1101813 status.go:255] checking status of ha-220492-m03 ...
	I0603 12:48:21.718307 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.718361 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.733740 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0603 12:48:21.734198 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.734662 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.734685 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.734997 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.735224 1101813 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:48:21.736626 1101813 status.go:330] ha-220492-m03 host status = "Running" (err=<nil>)
	I0603 12:48:21.736646 1101813 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:48:21.736948 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.736990 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.751957 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0603 12:48:21.752314 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.752772 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.752794 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.753131 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.753317 1101813 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:48:21.756043 1101813 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:21.756527 1101813 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:48:21.756555 1101813 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:21.756703 1101813 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:48:21.757129 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.757176 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.771782 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0603 12:48:21.772182 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.772629 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.772649 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.772958 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.773141 1101813 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:48:21.773313 1101813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:21.773338 1101813 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:48:21.776030 1101813 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:21.776486 1101813 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:48:21.776513 1101813 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:21.776681 1101813 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:48:21.776845 1101813 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:48:21.776974 1101813 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:48:21.777098 1101813 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:48:21.865232 1101813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:21.879930 1101813 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:48:21.879963 1101813 api_server.go:166] Checking apiserver status ...
	I0603 12:48:21.880002 1101813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:48:21.893378 1101813 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0603 12:48:21.902307 1101813 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:48:21.902359 1101813 ssh_runner.go:195] Run: ls
	I0603 12:48:21.906906 1101813 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:48:21.911206 1101813 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:48:21.911230 1101813 status.go:422] ha-220492-m03 apiserver status = Running (err=<nil>)
	I0603 12:48:21.911241 1101813 status.go:257] ha-220492-m03 status: &{Name:ha-220492-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:21.911262 1101813 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:48:21.911633 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.911677 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.927563 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35979
	I0603 12:48:21.928037 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.928547 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.928572 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.928984 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.929179 1101813 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:48:21.931088 1101813 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:48:21.931110 1101813 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:48:21.931537 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.931582 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.946894 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0603 12:48:21.947374 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.947947 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.947971 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.948366 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.948589 1101813 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:48:21.951567 1101813 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:21.951998 1101813 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:48:21.952028 1101813 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:21.952151 1101813 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:48:21.952539 1101813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:21.952587 1101813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:21.967604 1101813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
	I0603 12:48:21.968008 1101813 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:21.968520 1101813 main.go:141] libmachine: Using API Version  1
	I0603 12:48:21.968544 1101813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:21.968904 1101813 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:21.969099 1101813 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:48:21.969293 1101813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:21.969315 1101813 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:48:21.972118 1101813 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:21.972590 1101813 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:48:21.972629 1101813 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:21.972783 1101813 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:48:21.972962 1101813 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:48:21.973082 1101813 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:48:21.973246 1101813 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:48:22.056390 1101813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:22.070903 1101813 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
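The stdout block rendered above is a direct projection of the status structs logged in stderr (for example &{Name:ha-220492-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false ...}). As an illustrative stand-in only, not minikube's actual Status type, the sketch below mirrors the fields visible in those log lines and reproduces the per-node text layout, including the worker-node case where apiserver and kubeconfig are reported as Irrelevant and therefore omitted from the rendered block.

	package main

	import "fmt"

	// NodeStatus mirrors the fields visible in the status.go log lines above;
	// it is a hypothetical stand-in for minikube's internal status struct.
	type NodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	// printStatus renders one node in the same layout as the stdout block above.
	func printStatus(s NodeStatus) {
		fmt.Println(s.Name)
		if s.Worker {
			fmt.Println("type: Worker")
		} else {
			fmt.Println("type: Control Plane")
		}
		fmt.Printf("host: %s\nkubelet: %s\n", s.Host, s.Kubelet)
		if !s.Worker {
			fmt.Printf("apiserver: %s\nkubeconfig: %s\n", s.APIServer, s.Kubeconfig)
		}
		fmt.Println()
	}

	func main() {
		// Values copied from the run above: m02 is the stopped secondary control plane,
		// m04 is the worker node.
		printStatus(NodeStatus{Name: "ha-220492-m02", Host: "Stopped", Kubelet: "Stopped",
			APIServer: "Stopped", Kubeconfig: "Stopped"})
		printStatus(NodeStatus{Name: "ha-220492-m04", Host: "Running", Kubelet: "Running",
			APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true})
	}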
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 7 (661.961935ms)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-220492-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:48:30.090772 1101918 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:48:30.091053 1101918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:30.091064 1101918 out.go:304] Setting ErrFile to fd 2...
	I0603 12:48:30.091067 1101918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:30.091306 1101918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:48:30.091522 1101918 out.go:298] Setting JSON to false
	I0603 12:48:30.091550 1101918 mustload.go:65] Loading cluster: ha-220492
	I0603 12:48:30.091603 1101918 notify.go:220] Checking for updates...
	I0603 12:48:30.091985 1101918 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:48:30.092006 1101918 status.go:255] checking status of ha-220492 ...
	I0603 12:48:30.092408 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.092487 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.115146 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0603 12:48:30.115629 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.116224 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.116249 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.116660 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.116870 1101918 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:48:30.118857 1101918 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:48:30.118879 1101918 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:48:30.119171 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.119211 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.135598 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37859
	I0603 12:48:30.136173 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.136930 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.136958 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.137392 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.137622 1101918 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:48:30.140966 1101918 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:30.141438 1101918 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:48:30.141477 1101918 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:30.141600 1101918 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:48:30.141928 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.141955 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.157476 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
	I0603 12:48:30.157910 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.158392 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.158412 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.158782 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.158977 1101918 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:48:30.159219 1101918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:30.159250 1101918 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:48:30.162226 1101918 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:30.162670 1101918 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:48:30.162697 1101918 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:48:30.162924 1101918 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:48:30.163128 1101918 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:48:30.163325 1101918 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:48:30.163498 1101918 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:48:30.250396 1101918 ssh_runner.go:195] Run: systemctl --version
	I0603 12:48:30.257314 1101918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:30.277686 1101918 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:48:30.277726 1101918 api_server.go:166] Checking apiserver status ...
	I0603 12:48:30.277763 1101918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:48:30.294675 1101918 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup
	W0603 12:48:30.306253 1101918 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1226/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:48:30.306312 1101918 ssh_runner.go:195] Run: ls
	I0603 12:48:30.311408 1101918 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:48:30.315942 1101918 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:48:30.315964 1101918 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:48:30.315976 1101918 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:30.315992 1101918 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:48:30.316291 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.316337 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.331562 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0603 12:48:30.332007 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.332524 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.332540 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.332896 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.333130 1101918 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:48:30.334624 1101918 status.go:330] ha-220492-m02 host status = "Stopped" (err=<nil>)
	I0603 12:48:30.334638 1101918 status.go:343] host is not running, skipping remaining checks
	I0603 12:48:30.334643 1101918 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:30.334660 1101918 status.go:255] checking status of ha-220492-m03 ...
	I0603 12:48:30.334937 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.334960 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.351103 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0603 12:48:30.351658 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.352181 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.352200 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.352602 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.352824 1101918 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:48:30.354344 1101918 status.go:330] ha-220492-m03 host status = "Running" (err=<nil>)
	I0603 12:48:30.354376 1101918 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:48:30.354750 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.354779 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.370323 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0603 12:48:30.370734 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.371204 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.371237 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.371592 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.371756 1101918 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:48:30.374915 1101918 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:30.375406 1101918 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:48:30.375435 1101918 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:30.375602 1101918 host.go:66] Checking if "ha-220492-m03" exists ...
	I0603 12:48:30.376037 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.376078 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.391624 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36691
	I0603 12:48:30.392091 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.392662 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.392693 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.393041 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.393243 1101918 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:48:30.393463 1101918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:30.393488 1101918 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:48:30.396180 1101918 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:30.396683 1101918 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:48:30.396718 1101918 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:30.396895 1101918 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:48:30.397071 1101918 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:48:30.397241 1101918 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:48:30.397379 1101918 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:48:30.485391 1101918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:30.502491 1101918 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:48:30.502534 1101918 api_server.go:166] Checking apiserver status ...
	I0603 12:48:30.502579 1101918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:48:30.518831 1101918 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	W0603 12:48:30.530828 1101918 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:48:30.530907 1101918 ssh_runner.go:195] Run: ls
	I0603 12:48:30.538029 1101918 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:48:30.542553 1101918 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:48:30.542733 1101918 status.go:422] ha-220492-m03 apiserver status = Running (err=<nil>)
	I0603 12:48:30.542764 1101918 status.go:257] ha-220492-m03 status: &{Name:ha-220492-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:48:30.542787 1101918 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:48:30.543175 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.543227 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.558490 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I0603 12:48:30.558964 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.559462 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.559490 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.559877 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.560094 1101918 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:48:30.561706 1101918 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:48:30.561726 1101918 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:48:30.562128 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.562174 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.577035 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0603 12:48:30.577520 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.578043 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.578076 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.578413 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.578641 1101918 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:48:30.581596 1101918 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:30.582245 1101918 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:48:30.582291 1101918 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:30.582395 1101918 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:48:30.582694 1101918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:30.582740 1101918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:30.597788 1101918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I0603 12:48:30.598182 1101918 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:30.598644 1101918 main.go:141] libmachine: Using API Version  1
	I0603 12:48:30.598663 1101918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:30.598979 1101918 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:30.599167 1101918 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:48:30.599404 1101918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:48:30.599428 1101918 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:48:30.602387 1101918 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:30.602814 1101918 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:48:30.602840 1101918 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:30.603028 1101918 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:48:30.603192 1101918 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:48:30.603334 1101918 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:48:30.603520 1101918 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:48:30.689661 1101918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:48:30.703940 1101918 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr" : exit status 7
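The assertion fails because ha-220492-m02 still reports Host:Stopped after the secondary-node restart, so every retry of `minikube status` exits with status 7. A minimal, hypothetical wait loop under the same assumptions (not the retry logic ha_test.go actually uses) could poll the single-node host state with the same `status --format={{.Host}}` invocation the post-mortem helper below uses, tolerating the non-zero exit while the node is down:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForHostRunning is a hypothetical helper, not part of ha_test.go.
	// It repeatedly runs `minikube status --format={{.Host}} -p <profile> -n <node>`
	// and returns once the host state is reported as Running.
	func waitForHostRunning(minikube, profile, node string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// A non-zero exit (e.g. status 7 while the node is still down) is
			// tolerated here; only the captured stdout is inspected.
			out, _ := exec.Command(minikube, "status", "--format={{.Host}}",
				"-p", profile, "-n", node).Output()
			if strings.TrimSpace(string(out)) == "Running" {
				return nil
			}
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("node %s of profile %s not Running after %s", node, profile, timeout)
	}

	func main() {
		// Binary path, profile, and node names taken from the test invocation above.
		err := waitForHostRunning("out/minikube-linux-amd64", "ha-220492", "ha-220492-m02", 3*time.Minute)
		fmt.Println("wait result:", err)
	}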
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-220492 -n ha-220492
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-220492 logs -n 25: (1.425874397s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492:/home/docker/cp-test_ha-220492-m03_ha-220492.txt                       |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492 sudo cat                                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492.txt                                 |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m02:/home/docker/cp-test_ha-220492-m03_ha-220492-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m02 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04:/home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m04 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp testdata/cp-test.txt                                                | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3428699095/001/cp-test_ha-220492-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492:/home/docker/cp-test_ha-220492-m04_ha-220492.txt                       |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492 sudo cat                                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492.txt                                 |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m02:/home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m02 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03:/home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m03 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-220492 node stop m02 -v=7                                                     | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-220492 node start m02 -v=7                                                    | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:40:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:40:45.154122 1096371 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:40:45.154220 1096371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:40:45.154228 1096371 out.go:304] Setting ErrFile to fd 2...
	I0603 12:40:45.154232 1096371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:40:45.154410 1096371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:40:45.154944 1096371 out.go:298] Setting JSON to false
	I0603 12:40:45.155926 1096371 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12192,"bootTime":1717406253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:40:45.155986 1096371 start.go:139] virtualization: kvm guest
	I0603 12:40:45.158145 1096371 out.go:177] * [ha-220492] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:40:45.159736 1096371 notify.go:220] Checking for updates...
	I0603 12:40:45.159744 1096371 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:40:45.161095 1096371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:40:45.162385 1096371 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:40:45.163711 1096371 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:40:45.164898 1096371 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:40:45.166037 1096371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:40:45.167326 1096371 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:40:45.202490 1096371 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 12:40:45.203766 1096371 start.go:297] selected driver: kvm2
	I0603 12:40:45.203780 1096371 start.go:901] validating driver "kvm2" against <nil>
	I0603 12:40:45.203793 1096371 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:40:45.204471 1096371 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:40:45.204555 1096371 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:40:45.219610 1096371 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:40:45.219670 1096371 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 12:40:45.219878 1096371 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:40:45.219951 1096371 cni.go:84] Creating CNI manager for ""
	I0603 12:40:45.219969 1096371 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0603 12:40:45.219978 1096371 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 12:40:45.220046 1096371 start.go:340] cluster config:
	{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:40:45.220155 1096371 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:40:45.221748 1096371 out.go:177] * Starting "ha-220492" primary control-plane node in "ha-220492" cluster
	I0603 12:40:45.222990 1096371 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:40:45.223024 1096371 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 12:40:45.223048 1096371 cache.go:56] Caching tarball of preloaded images
	I0603 12:40:45.223125 1096371 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:40:45.223137 1096371 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:40:45.223447 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:40:45.223472 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json: {Name:mkc9aa250f9d043c2e947d40a6dc3875c1521c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:40:45.223612 1096371 start.go:360] acquireMachinesLock for ha-220492: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:40:45.223654 1096371 start.go:364] duration metric: took 25.719µs to acquireMachinesLock for "ha-220492"
	I0603 12:40:45.223683 1096371 start.go:93] Provisioning new machine with config: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:40:45.223742 1096371 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 12:40:45.225464 1096371 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 12:40:45.225606 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:40:45.225660 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:40:45.239421 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34301
	I0603 12:40:45.239910 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:40:45.240536 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:40:45.240564 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:40:45.240924 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:40:45.241106 1096371 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:40:45.241237 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:40:45.241441 1096371 start.go:159] libmachine.API.Create for "ha-220492" (driver="kvm2")
	I0603 12:40:45.241473 1096371 client.go:168] LocalClient.Create starting
	I0603 12:40:45.241501 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 12:40:45.241533 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:40:45.241550 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:40:45.241605 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 12:40:45.241624 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:40:45.241637 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:40:45.241653 1096371 main.go:141] libmachine: Running pre-create checks...
	I0603 12:40:45.241662 1096371 main.go:141] libmachine: (ha-220492) Calling .PreCreateCheck
	I0603 12:40:45.242015 1096371 main.go:141] libmachine: (ha-220492) Calling .GetConfigRaw
	I0603 12:40:45.242395 1096371 main.go:141] libmachine: Creating machine...
	I0603 12:40:45.242419 1096371 main.go:141] libmachine: (ha-220492) Calling .Create
	I0603 12:40:45.242576 1096371 main.go:141] libmachine: (ha-220492) Creating KVM machine...
	I0603 12:40:45.243829 1096371 main.go:141] libmachine: (ha-220492) DBG | found existing default KVM network
	I0603 12:40:45.244550 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.244404 1096394 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d9a0}
	I0603 12:40:45.244567 1096371 main.go:141] libmachine: (ha-220492) DBG | created network xml: 
	I0603 12:40:45.244577 1096371 main.go:141] libmachine: (ha-220492) DBG | <network>
	I0603 12:40:45.244582 1096371 main.go:141] libmachine: (ha-220492) DBG |   <name>mk-ha-220492</name>
	I0603 12:40:45.244588 1096371 main.go:141] libmachine: (ha-220492) DBG |   <dns enable='no'/>
	I0603 12:40:45.244592 1096371 main.go:141] libmachine: (ha-220492) DBG |   
	I0603 12:40:45.244602 1096371 main.go:141] libmachine: (ha-220492) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0603 12:40:45.244613 1096371 main.go:141] libmachine: (ha-220492) DBG |     <dhcp>
	I0603 12:40:45.244623 1096371 main.go:141] libmachine: (ha-220492) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0603 12:40:45.244634 1096371 main.go:141] libmachine: (ha-220492) DBG |     </dhcp>
	I0603 12:40:45.244642 1096371 main.go:141] libmachine: (ha-220492) DBG |   </ip>
	I0603 12:40:45.244653 1096371 main.go:141] libmachine: (ha-220492) DBG |   
	I0603 12:40:45.244665 1096371 main.go:141] libmachine: (ha-220492) DBG | </network>
	I0603 12:40:45.244673 1096371 main.go:141] libmachine: (ha-220492) DBG | 
	I0603 12:40:45.249628 1096371 main.go:141] libmachine: (ha-220492) DBG | trying to create private KVM network mk-ha-220492 192.168.39.0/24...
	I0603 12:40:45.311984 1096371 main.go:141] libmachine: (ha-220492) DBG | private KVM network mk-ha-220492 192.168.39.0/24 created
	I0603 12:40:45.312068 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.311945 1096394 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:40:45.312094 1096371 main.go:141] libmachine: (ha-220492) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492 ...
	I0603 12:40:45.312130 1096371 main.go:141] libmachine: (ha-220492) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:40:45.312150 1096371 main.go:141] libmachine: (ha-220492) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:40:45.584465 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.584331 1096394 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa...
	I0603 12:40:45.705607 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.705464 1096394 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/ha-220492.rawdisk...
	I0603 12:40:45.705640 1096371 main.go:141] libmachine: (ha-220492) DBG | Writing magic tar header
	I0603 12:40:45.705650 1096371 main.go:141] libmachine: (ha-220492) DBG | Writing SSH key tar header
	I0603 12:40:45.705737 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:45.705644 1096394 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492 ...
	I0603 12:40:45.705855 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492
	I0603 12:40:45.705879 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492 (perms=drwx------)
	I0603 12:40:45.705888 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 12:40:45.705899 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 12:40:45.705915 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 12:40:45.705929 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:40:45.705940 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 12:40:45.705956 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 12:40:45.705966 1096371 main.go:141] libmachine: (ha-220492) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 12:40:45.705975 1096371 main.go:141] libmachine: (ha-220492) Creating domain...
	I0603 12:40:45.705988 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 12:40:45.706002 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 12:40:45.706018 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home/jenkins
	I0603 12:40:45.706029 1096371 main.go:141] libmachine: (ha-220492) DBG | Checking permissions on dir: /home
	I0603 12:40:45.706040 1096371 main.go:141] libmachine: (ha-220492) DBG | Skipping /home - not owner
	I0603 12:40:45.707030 1096371 main.go:141] libmachine: (ha-220492) define libvirt domain using xml: 
	I0603 12:40:45.707050 1096371 main.go:141] libmachine: (ha-220492) <domain type='kvm'>
	I0603 12:40:45.707056 1096371 main.go:141] libmachine: (ha-220492)   <name>ha-220492</name>
	I0603 12:40:45.707064 1096371 main.go:141] libmachine: (ha-220492)   <memory unit='MiB'>2200</memory>
	I0603 12:40:45.707090 1096371 main.go:141] libmachine: (ha-220492)   <vcpu>2</vcpu>
	I0603 12:40:45.707111 1096371 main.go:141] libmachine: (ha-220492)   <features>
	I0603 12:40:45.707120 1096371 main.go:141] libmachine: (ha-220492)     <acpi/>
	I0603 12:40:45.707127 1096371 main.go:141] libmachine: (ha-220492)     <apic/>
	I0603 12:40:45.707135 1096371 main.go:141] libmachine: (ha-220492)     <pae/>
	I0603 12:40:45.707147 1096371 main.go:141] libmachine: (ha-220492)     
	I0603 12:40:45.707155 1096371 main.go:141] libmachine: (ha-220492)   </features>
	I0603 12:40:45.707162 1096371 main.go:141] libmachine: (ha-220492)   <cpu mode='host-passthrough'>
	I0603 12:40:45.707174 1096371 main.go:141] libmachine: (ha-220492)   
	I0603 12:40:45.707184 1096371 main.go:141] libmachine: (ha-220492)   </cpu>
	I0603 12:40:45.707192 1096371 main.go:141] libmachine: (ha-220492)   <os>
	I0603 12:40:45.707199 1096371 main.go:141] libmachine: (ha-220492)     <type>hvm</type>
	I0603 12:40:45.707208 1096371 main.go:141] libmachine: (ha-220492)     <boot dev='cdrom'/>
	I0603 12:40:45.707219 1096371 main.go:141] libmachine: (ha-220492)     <boot dev='hd'/>
	I0603 12:40:45.707296 1096371 main.go:141] libmachine: (ha-220492)     <bootmenu enable='no'/>
	I0603 12:40:45.707352 1096371 main.go:141] libmachine: (ha-220492)   </os>
	I0603 12:40:45.707369 1096371 main.go:141] libmachine: (ha-220492)   <devices>
	I0603 12:40:45.707381 1096371 main.go:141] libmachine: (ha-220492)     <disk type='file' device='cdrom'>
	I0603 12:40:45.707398 1096371 main.go:141] libmachine: (ha-220492)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/boot2docker.iso'/>
	I0603 12:40:45.707417 1096371 main.go:141] libmachine: (ha-220492)       <target dev='hdc' bus='scsi'/>
	I0603 12:40:45.707434 1096371 main.go:141] libmachine: (ha-220492)       <readonly/>
	I0603 12:40:45.707454 1096371 main.go:141] libmachine: (ha-220492)     </disk>
	I0603 12:40:45.707466 1096371 main.go:141] libmachine: (ha-220492)     <disk type='file' device='disk'>
	I0603 12:40:45.707484 1096371 main.go:141] libmachine: (ha-220492)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 12:40:45.707499 1096371 main.go:141] libmachine: (ha-220492)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/ha-220492.rawdisk'/>
	I0603 12:40:45.707510 1096371 main.go:141] libmachine: (ha-220492)       <target dev='hda' bus='virtio'/>
	I0603 12:40:45.707518 1096371 main.go:141] libmachine: (ha-220492)     </disk>
	I0603 12:40:45.707533 1096371 main.go:141] libmachine: (ha-220492)     <interface type='network'>
	I0603 12:40:45.707547 1096371 main.go:141] libmachine: (ha-220492)       <source network='mk-ha-220492'/>
	I0603 12:40:45.707557 1096371 main.go:141] libmachine: (ha-220492)       <model type='virtio'/>
	I0603 12:40:45.707566 1096371 main.go:141] libmachine: (ha-220492)     </interface>
	I0603 12:40:45.707576 1096371 main.go:141] libmachine: (ha-220492)     <interface type='network'>
	I0603 12:40:45.707605 1096371 main.go:141] libmachine: (ha-220492)       <source network='default'/>
	I0603 12:40:45.707621 1096371 main.go:141] libmachine: (ha-220492)       <model type='virtio'/>
	I0603 12:40:45.707633 1096371 main.go:141] libmachine: (ha-220492)     </interface>
	I0603 12:40:45.707643 1096371 main.go:141] libmachine: (ha-220492)     <serial type='pty'>
	I0603 12:40:45.707654 1096371 main.go:141] libmachine: (ha-220492)       <target port='0'/>
	I0603 12:40:45.707664 1096371 main.go:141] libmachine: (ha-220492)     </serial>
	I0603 12:40:45.707675 1096371 main.go:141] libmachine: (ha-220492)     <console type='pty'>
	I0603 12:40:45.707690 1096371 main.go:141] libmachine: (ha-220492)       <target type='serial' port='0'/>
	I0603 12:40:45.707709 1096371 main.go:141] libmachine: (ha-220492)     </console>
	I0603 12:40:45.707725 1096371 main.go:141] libmachine: (ha-220492)     <rng model='virtio'>
	I0603 12:40:45.707750 1096371 main.go:141] libmachine: (ha-220492)       <backend model='random'>/dev/random</backend>
	I0603 12:40:45.707766 1096371 main.go:141] libmachine: (ha-220492)     </rng>
	I0603 12:40:45.707778 1096371 main.go:141] libmachine: (ha-220492)     
	I0603 12:40:45.707791 1096371 main.go:141] libmachine: (ha-220492)     
	I0603 12:40:45.707809 1096371 main.go:141] libmachine: (ha-220492)   </devices>
	I0603 12:40:45.707823 1096371 main.go:141] libmachine: (ha-220492) </domain>
	I0603 12:40:45.707838 1096371 main.go:141] libmachine: (ha-220492) 
	I0603 12:40:45.711436 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:de:86:31 in network default
	I0603 12:40:45.712025 1096371 main.go:141] libmachine: (ha-220492) Ensuring networks are active...
	I0603 12:40:45.712047 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:45.712635 1096371 main.go:141] libmachine: (ha-220492) Ensuring network default is active
	I0603 12:40:45.712929 1096371 main.go:141] libmachine: (ha-220492) Ensuring network mk-ha-220492 is active
	I0603 12:40:45.713519 1096371 main.go:141] libmachine: (ha-220492) Getting domain xml...
	I0603 12:40:45.714138 1096371 main.go:141] libmachine: (ha-220492) Creating domain...
	I0603 12:40:46.873866 1096371 main.go:141] libmachine: (ha-220492) Waiting to get IP...
	I0603 12:40:46.874617 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:46.875016 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:46.875059 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:46.874985 1096394 retry.go:31] will retry after 292.608651ms: waiting for machine to come up
	I0603 12:40:47.169512 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:47.169993 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:47.170024 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:47.169954 1096394 retry.go:31] will retry after 331.173202ms: waiting for machine to come up
	I0603 12:40:47.502498 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:47.502913 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:47.502948 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:47.502857 1096394 retry.go:31] will retry after 380.084322ms: waiting for machine to come up
	I0603 12:40:47.884522 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:47.884945 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:47.884970 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:47.884914 1096394 retry.go:31] will retry after 457.940031ms: waiting for machine to come up
	I0603 12:40:48.344494 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:48.344876 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:48.344897 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:48.344817 1096394 retry.go:31] will retry after 632.576512ms: waiting for machine to come up
	I0603 12:40:48.978563 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:48.978972 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:48.978999 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:48.978929 1096394 retry.go:31] will retry after 909.430383ms: waiting for machine to come up
	I0603 12:40:49.889574 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:49.889917 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:49.889951 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:49.889847 1096394 retry.go:31] will retry after 1.060400826s: waiting for machine to come up
	I0603 12:40:50.951652 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:50.952086 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:50.952113 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:50.952035 1096394 retry.go:31] will retry after 967.639036ms: waiting for machine to come up
	I0603 12:40:51.921500 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:51.921850 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:51.921911 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:51.921829 1096394 retry.go:31] will retry after 1.739106555s: waiting for machine to come up
	I0603 12:40:53.665285 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:53.665828 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:53.665858 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:53.665772 1096394 retry.go:31] will retry after 1.453970794s: waiting for machine to come up
	I0603 12:40:55.121583 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:55.121969 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:55.122001 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:55.121908 1096394 retry.go:31] will retry after 1.916636172s: waiting for machine to come up
	I0603 12:40:57.040564 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:57.041000 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:57.041029 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:57.040958 1096394 retry.go:31] will retry after 2.280642214s: waiting for machine to come up
	I0603 12:40:59.324400 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:40:59.324815 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:40:59.324841 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:40:59.324777 1096394 retry.go:31] will retry after 4.41502757s: waiting for machine to come up
	I0603 12:41:03.743917 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:03.744314 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find current IP address of domain ha-220492 in network mk-ha-220492
	I0603 12:41:03.744338 1096371 main.go:141] libmachine: (ha-220492) DBG | I0603 12:41:03.744274 1096394 retry.go:31] will retry after 4.66191218s: waiting for machine to come up
	I0603 12:41:08.410233 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.410774 1096371 main.go:141] libmachine: (ha-220492) Found IP for machine: 192.168.39.6
	I0603 12:41:08.410804 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has current primary IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.410813 1096371 main.go:141] libmachine: (ha-220492) Reserving static IP address...
	I0603 12:41:08.411211 1096371 main.go:141] libmachine: (ha-220492) DBG | unable to find host DHCP lease matching {name: "ha-220492", mac: "52:54:00:79:0d:a6", ip: "192.168.39.6"} in network mk-ha-220492
	I0603 12:41:08.484713 1096371 main.go:141] libmachine: (ha-220492) DBG | Getting to WaitForSSH function...
	I0603 12:41:08.484747 1096371 main.go:141] libmachine: (ha-220492) Reserved static IP address: 192.168.39.6
	I0603 12:41:08.484761 1096371 main.go:141] libmachine: (ha-220492) Waiting for SSH to be available...
	I0603 12:41:08.487460 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.487883 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.487928 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.488036 1096371 main.go:141] libmachine: (ha-220492) DBG | Using SSH client type: external
	I0603 12:41:08.488065 1096371 main.go:141] libmachine: (ha-220492) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa (-rw-------)
	I0603 12:41:08.488115 1096371 main.go:141] libmachine: (ha-220492) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:41:08.488135 1096371 main.go:141] libmachine: (ha-220492) DBG | About to run SSH command:
	I0603 12:41:08.488148 1096371 main.go:141] libmachine: (ha-220492) DBG | exit 0
	I0603 12:41:08.617602 1096371 main.go:141] libmachine: (ha-220492) DBG | SSH cmd err, output: <nil>: 
	I0603 12:41:08.617902 1096371 main.go:141] libmachine: (ha-220492) KVM machine creation complete!
	I0603 12:41:08.618255 1096371 main.go:141] libmachine: (ha-220492) Calling .GetConfigRaw
	I0603 12:41:08.618835 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:08.619050 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:08.619264 1096371 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 12:41:08.619281 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:41:08.620453 1096371 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 12:41:08.620481 1096371 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 12:41:08.620487 1096371 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 12:41:08.620508 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:08.623035 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.623483 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.623499 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.623677 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:08.623919 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.624078 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.624333 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:08.624520 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:08.624742 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:08.624757 1096371 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 12:41:08.732628 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:41:08.732662 1096371 main.go:141] libmachine: Detecting the provisioner...
	I0603 12:41:08.732674 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:08.735828 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.736203 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.736226 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.736419 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:08.736625 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.736793 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.736950 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:08.737098 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:08.737324 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:08.737339 1096371 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 12:41:08.846417 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 12:41:08.846525 1096371 main.go:141] libmachine: found compatible host: buildroot
	I0603 12:41:08.846537 1096371 main.go:141] libmachine: Provisioning with buildroot...
	I0603 12:41:08.846545 1096371 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:41:08.846871 1096371 buildroot.go:166] provisioning hostname "ha-220492"
	I0603 12:41:08.846903 1096371 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:41:08.847118 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:08.849533 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.849812 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.849854 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.849968 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:08.850170 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.850325 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.850543 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:08.850678 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:08.850889 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:08.850902 1096371 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220492 && echo "ha-220492" | sudo tee /etc/hostname
	I0603 12:41:08.975847 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492
	
	I0603 12:41:08.975877 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:08.978686 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.978954 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:08.978999 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:08.979154 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:08.979387 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.979591 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:08.979736 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:08.979922 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:08.980097 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:08.980113 1096371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220492/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:41:09.099148 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:41:09.099187 1096371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:41:09.099227 1096371 buildroot.go:174] setting up certificates
	I0603 12:41:09.099240 1096371 provision.go:84] configureAuth start
	I0603 12:41:09.099252 1096371 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:41:09.099581 1096371 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:41:09.102107 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.102418 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.102444 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.102566 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.104787 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.105123 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.105149 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.105298 1096371 provision.go:143] copyHostCerts
	I0603 12:41:09.105329 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:41:09.105377 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 12:41:09.105387 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:41:09.105475 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:41:09.105607 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:41:09.105626 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 12:41:09.105631 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:41:09.105661 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:41:09.105718 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:41:09.105736 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 12:41:09.105739 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:41:09.105772 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:41:09.105833 1096371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.ha-220492 san=[127.0.0.1 192.168.39.6 ha-220492 localhost minikube]
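
The server-cert generation step above issues a certificate signed by the existing CA with both IP and DNS SANs. A minimal Go sketch of that kind of issuance is below; it is illustrative only (not minikube's provision code), assumes the CA key is RSA in PKCS#1 PEM form, and elides error handling for brevity.

// Illustrative sketch: issue a server certificate signed by an existing CA
// with the SAN set shown in the provision log. Not minikube's code; the
// file paths and PKCS#1 CA key format are assumptions, errors are ignored.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the signing CA (the ca.pem / ca-key.pem pair from the log).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Fresh key pair for the server certificate.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-220492"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: IP addresses plus host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.6")},
		DNSNames:    []string{"ha-220492", "localhost", "minikube"},
	}

	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0600)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}
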
	I0603 12:41:09.144506 1096371 provision.go:177] copyRemoteCerts
	I0603 12:41:09.144571 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:41:09.144595 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.147555 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.147871 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.147911 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.148084 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.148311 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.148463 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.148616 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:09.232186 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 12:41:09.232270 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:41:09.256495 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 12:41:09.256591 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 12:41:09.279937 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 12:41:09.280020 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 12:41:09.302800 1096371 provision.go:87] duration metric: took 203.541974ms to configureAuth
	I0603 12:41:09.302832 1096371 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:41:09.303052 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:41:09.303169 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.305950 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.306309 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.306345 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.306571 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.306767 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.306974 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.307118 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.307322 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:09.307541 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:09.307568 1096371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:41:09.582908 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:41:09.582947 1096371 main.go:141] libmachine: Checking connection to Docker...
	I0603 12:41:09.582973 1096371 main.go:141] libmachine: (ha-220492) Calling .GetURL
	I0603 12:41:09.584407 1096371 main.go:141] libmachine: (ha-220492) DBG | Using libvirt version 6000000
	I0603 12:41:09.586804 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.587235 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.587260 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.587399 1096371 main.go:141] libmachine: Docker is up and running!
	I0603 12:41:09.587414 1096371 main.go:141] libmachine: Reticulating splines...
	I0603 12:41:09.587424 1096371 client.go:171] duration metric: took 24.345940503s to LocalClient.Create
	I0603 12:41:09.587453 1096371 start.go:167] duration metric: took 24.346013192s to libmachine.API.Create "ha-220492"
	I0603 12:41:09.587467 1096371 start.go:293] postStartSetup for "ha-220492" (driver="kvm2")
	I0603 12:41:09.587488 1096371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:41:09.587511 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.587761 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:41:09.587787 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.589732 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.590060 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.590087 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.590164 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.590378 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.590558 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.590740 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:09.676420 1096371 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:41:09.680623 1096371 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:41:09.680650 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:41:09.680735 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:41:09.680843 1096371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 12:41:09.680858 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 12:41:09.680969 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:41:09.690475 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:41:09.714650 1096371 start.go:296] duration metric: took 127.159539ms for postStartSetup
	I0603 12:41:09.714708 1096371 main.go:141] libmachine: (ha-220492) Calling .GetConfigRaw
	I0603 12:41:09.715397 1096371 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:41:09.718274 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.718634 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.718662 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.718992 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:41:09.719173 1096371 start.go:128] duration metric: took 24.495419868s to createHost
	I0603 12:41:09.719240 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.721338 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.721632 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.721654 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.721797 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.721975 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.722162 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.722277 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.722449 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:41:09.722617 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:41:09.722638 1096371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:41:09.834352 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418469.811408647
	
	I0603 12:41:09.834385 1096371 fix.go:216] guest clock: 1717418469.811408647
	I0603 12:41:09.834395 1096371 fix.go:229] Guest: 2024-06-03 12:41:09.811408647 +0000 UTC Remote: 2024-06-03 12:41:09.719204809 +0000 UTC m=+24.601774795 (delta=92.203838ms)
	I0603 12:41:09.834422 1096371 fix.go:200] guest clock delta is within tolerance: 92.203838ms
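
The tolerance check above compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the delta is small. A small Go sketch of that comparison follows; the one-second tolerance is an assumption for illustration and the helper name is hypothetical.

// Illustrative sketch: parse the guest's `date +%s.%N` output and compute
// the host/guest clock delta. The tolerance value below is an assumption.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(output string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(output), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// `date +%N` prints nine digits, so the fraction is nanoseconds.
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	delta, err := guestClockDelta("1717418469.811408647")
	if err != nil {
		panic(err)
	}
	tolerance := time.Second
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("guest clock delta %v (within %v tolerance: %v)\n", delta, tolerance, within)
}
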
	I0603 12:41:09.834428 1096371 start.go:83] releasing machines lock for "ha-220492", held for 24.610763142s
	I0603 12:41:09.834448 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.834698 1096371 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:41:09.837362 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.837770 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.837810 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.837878 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.838413 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.838611 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:09.838714 1096371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:41:09.838765 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.838861 1096371 ssh_runner.go:195] Run: cat /version.json
	I0603 12:41:09.838887 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:09.841501 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.841605 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.841930 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.841956 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.842004 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:09.842040 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:09.842084 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.842265 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:09.842326 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.842453 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:09.842481 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.842687 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:09.842707 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:09.842841 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:09.946969 1096371 ssh_runner.go:195] Run: systemctl --version
	I0603 12:41:09.953061 1096371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:41:10.114367 1096371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:41:10.120451 1096371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:41:10.120507 1096371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:41:10.136901 1096371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:41:10.136927 1096371 start.go:494] detecting cgroup driver to use...
	I0603 12:41:10.137010 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:41:10.152519 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:41:10.166479 1096371 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:41:10.166553 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:41:10.179615 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:41:10.192772 1096371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:41:10.302754 1096371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:41:10.447209 1096371 docker.go:233] disabling docker service ...
	I0603 12:41:10.447309 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:41:10.462073 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:41:10.475186 1096371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:41:10.604450 1096371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:41:10.730595 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:41:10.744935 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:41:10.763746 1096371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:41:10.763808 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.774316 1096371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:41:10.774404 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.784785 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.795071 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.805255 1096371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:41:10.815375 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.825270 1096371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:41:10.842181 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
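
The sed commands above rewrite the CRI-O drop-in (pause image, cgroup manager, conmon cgroup, unprivileged ports). Collected into one helper they read more easily; this sketch only reproduces the shell commands shown in the log and is not a minikube API.

// Illustrative sketch: the CRI-O drop-in edits from the log, gathered into
// one function so the sequence is visible at a glance. Helper name is
// hypothetical; the commands match the sed invocations above.
package main

import "fmt"

func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		// Pin the pause image and the cgroup manager.
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		// Recreate conmon_cgroup right after cgroup_manager.
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		// Allow unprivileged low ports via default_sysctls.
		fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}
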
	I0603 12:41:10.852166 1096371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:41:10.861053 1096371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:41:10.861113 1096371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:41:10.874159 1096371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:41:10.883417 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:41:10.992570 1096371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:41:11.128086 1096371 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:41:11.128206 1096371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:41:11.132907 1096371 start.go:562] Will wait 60s for crictl version
	I0603 12:41:11.132978 1096371 ssh_runner.go:195] Run: which crictl
	I0603 12:41:11.136891 1096371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:41:11.176818 1096371 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:41:11.176897 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:41:11.205711 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:41:11.235610 1096371 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:41:11.236829 1096371 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:41:11.239504 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:11.239857 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:11.239902 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:11.240094 1096371 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:41:11.244177 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
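
The bash one-liner above (and its later counterpart for control-plane.minikube.internal) performs an idempotent /etc/hosts update: drop any existing line for the name, append the desired mapping, and copy the result back. A hedged Go equivalent, with a hypothetical helper name, looks like this; writing /etc/hosts of course requires root.

// Illustrative sketch of the idempotent /etc/hosts update: remove any
// stale mapping for the name, then append the desired ip<TAB>name line.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping, equivalent to `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
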
	I0603 12:41:11.257181 1096371 kubeadm.go:877] updating cluster {Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:41:11.257314 1096371 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:41:11.257358 1096371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:41:11.290352 1096371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 12:41:11.290435 1096371 ssh_runner.go:195] Run: which lz4
	I0603 12:41:11.294176 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0603 12:41:11.294272 1096371 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 12:41:11.298645 1096371 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 12:41:11.298674 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 12:41:12.707970 1096371 crio.go:462] duration metric: took 1.413714631s to copy over tarball
	I0603 12:41:12.708044 1096371 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 12:41:14.850543 1096371 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.142469766s)
	I0603 12:41:14.850575 1096371 crio.go:469] duration metric: took 2.142572179s to extract the tarball
	I0603 12:41:14.850582 1096371 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 12:41:14.888041 1096371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:41:14.937691 1096371 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:41:14.937722 1096371 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:41:14.937731 1096371 kubeadm.go:928] updating node { 192.168.39.6 8443 v1.30.1 crio true true} ...
	I0603 12:41:14.937872 1096371 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:41:14.937971 1096371 ssh_runner.go:195] Run: crio config
	I0603 12:41:14.983244 1096371 cni.go:84] Creating CNI manager for ""
	I0603 12:41:14.983269 1096371 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 12:41:14.983283 1096371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:41:14.983306 1096371 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-220492 NodeName:ha-220492 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:41:14.983454 1096371 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-220492"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
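
The kubeadm configuration above is rendered from the option set logged at kubeadm.go:181. A toy sketch of rendering such a manifest with Go's text/template is shown below; the struct and template are illustrative (kubeadm v1beta3 field names), not minikube's own generator.

// Illustrative sketch: render a kubeadm InitConfiguration fragment from a
// template. The struct and template here are assumptions for illustration.
package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.168.39.6",
		BindPort:         8443,
		NodeName:         "ha-220492",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	})
}
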
	
	I0603 12:41:14.983485 1096371 kube-vip.go:115] generating kube-vip config ...
	I0603 12:41:14.983530 1096371 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 12:41:15.002647 1096371 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 12:41:15.002758 1096371 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0603 12:41:15.002834 1096371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:41:15.013239 1096371 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:41:15.013305 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 12:41:15.023168 1096371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0603 12:41:15.040219 1096371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:41:15.056200 1096371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0603 12:41:15.072933 1096371 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0603 12:41:15.089270 1096371 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 12:41:15.093234 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:41:15.105390 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:41:15.213160 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:41:15.228491 1096371 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492 for IP: 192.168.39.6
	I0603 12:41:15.228516 1096371 certs.go:194] generating shared ca certs ...
	I0603 12:41:15.228534 1096371 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:15.228726 1096371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:41:15.228786 1096371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:41:15.228800 1096371 certs.go:256] generating profile certs ...
	I0603 12:41:15.228874 1096371 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key
	I0603 12:41:15.228891 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt with IP's: []
	I0603 12:41:16.007432 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt ...
	I0603 12:41:16.007467 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt: {Name:mkcf8e4c0397b30b1fc6ff360e1357815d7e9487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.007645 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key ...
	I0603 12:41:16.007657 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key: {Name:mkf5571341d9e95c379715e81518b377f7fe4a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.007742 1096371 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.8b57d20e
	I0603 12:41:16.007758 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.8b57d20e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.254]
	I0603 12:41:16.076792 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.8b57d20e ...
	I0603 12:41:16.076823 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.8b57d20e: {Name:mk7c3878ef4aff24b303a01d932b8859cd5fadb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.076982 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.8b57d20e ...
	I0603 12:41:16.076995 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.8b57d20e: {Name:mk2a40f6900698664b0c05d410f3a6a10c2384fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.077063 1096371 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.8b57d20e -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt
	I0603 12:41:16.077155 1096371 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.8b57d20e -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key
	I0603 12:41:16.077214 1096371 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key
	I0603 12:41:16.077230 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt with IP's: []
	I0603 12:41:16.343261 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt ...
	I0603 12:41:16.343300 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt: {Name:mk84cee379f524557192feddab8407818bce5852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.343477 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key ...
	I0603 12:41:16.343487 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key: {Name:mkba07abba520f757a1375a8fe5f778a22b26881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:16.343559 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 12:41:16.343576 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 12:41:16.343586 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 12:41:16.343599 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 12:41:16.343609 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 12:41:16.343620 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 12:41:16.343630 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 12:41:16.343641 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 12:41:16.343698 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 12:41:16.343735 1096371 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 12:41:16.343745 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:41:16.343767 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:41:16.343790 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:41:16.343811 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 12:41:16.343846 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:41:16.343872 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 12:41:16.343886 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:41:16.343898 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 12:41:16.344503 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:41:16.374351 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:41:16.399920 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:41:16.426092 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:41:16.450523 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 12:41:16.475199 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 12:41:16.499461 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:41:16.525093 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:41:16.548674 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 12:41:16.572498 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:41:16.595826 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 12:41:16.619941 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:41:16.636934 1096371 ssh_runner.go:195] Run: openssl version
	I0603 12:41:16.642939 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 12:41:16.653985 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 12:41:16.658496 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 12:41:16.658561 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 12:41:16.664405 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 12:41:16.675317 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 12:41:16.686072 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 12:41:16.690593 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 12:41:16.690652 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 12:41:16.696468 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:41:16.707203 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:41:16.717986 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:41:16.722665 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:41:16.722725 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:41:16.728327 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
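
The `ln -fs` commands above link each CA certificate under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0), which is how the system trust store locates it. A minimal Go sketch of that step, shelling out to the openssl binary rather than reimplementing the hash, is below; paths are illustrative and the operation needs root.

// Illustrative sketch: compute a certificate's OpenSSL subject hash and
// create the <hash>.0 symlink, mirroring the ln -fs commands in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
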
	I0603 12:41:16.739532 1096371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:41:16.743738 1096371 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 12:41:16.743806 1096371 kubeadm.go:391] StartCluster: {Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:41:16.743898 1096371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:41:16.743942 1096371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:41:16.786837 1096371 cri.go:89] found id: ""
	I0603 12:41:16.786924 1096371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 12:41:16.797324 1096371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 12:41:16.807138 1096371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 12:41:16.816725 1096371 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 12:41:16.816747 1096371 kubeadm.go:156] found existing configuration files:
	
	I0603 12:41:16.816787 1096371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 12:41:16.826048 1096371 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 12:41:16.826117 1096371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 12:41:16.835723 1096371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 12:41:16.844806 1096371 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 12:41:16.844855 1096371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 12:41:16.854180 1096371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 12:41:16.863089 1096371 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 12:41:16.863148 1096371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 12:41:16.872432 1096371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 12:41:16.881249 1096371 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 12:41:16.881315 1096371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
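
The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A hedged Go sketch of that loop follows; the helper is illustrative, not minikube's kubeadm.go.

// Illustrative sketch of the stale kubeconfig cleanup: keep a file only if
// it already references the control-plane endpoint, otherwise remove it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already targets the right endpoint, keep it
		}
		os.Remove(f) // missing or stale: remove, ignoring errors like `rm -f`
		fmt.Printf("removed stale config: %s\n", f)
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
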
	I0603 12:41:16.893091 1096371 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 12:41:17.142716 1096371 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 12:41:27.696084 1096371 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 12:41:27.696146 1096371 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 12:41:27.696209 1096371 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 12:41:27.696314 1096371 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 12:41:27.696448 1096371 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 12:41:27.696559 1096371 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 12:41:27.697948 1096371 out.go:204]   - Generating certificates and keys ...
	I0603 12:41:27.698064 1096371 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 12:41:27.698153 1096371 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 12:41:27.698252 1096371 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 12:41:27.698353 1096371 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 12:41:27.698433 1096371 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 12:41:27.698486 1096371 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 12:41:27.698532 1096371 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 12:41:27.698649 1096371 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-220492 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0603 12:41:27.698720 1096371 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 12:41:27.698859 1096371 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-220492 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0603 12:41:27.698941 1096371 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 12:41:27.699029 1096371 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 12:41:27.699095 1096371 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 12:41:27.699173 1096371 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 12:41:27.699237 1096371 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 12:41:27.699308 1096371 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 12:41:27.699388 1096371 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 12:41:27.699470 1096371 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 12:41:27.699550 1096371 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 12:41:27.699659 1096371 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 12:41:27.699746 1096371 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 12:41:27.701049 1096371 out.go:204]   - Booting up control plane ...
	I0603 12:41:27.701154 1096371 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 12:41:27.701232 1096371 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 12:41:27.701295 1096371 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 12:41:27.701389 1096371 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 12:41:27.701482 1096371 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 12:41:27.701521 1096371 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 12:41:27.701655 1096371 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 12:41:27.701751 1096371 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 12:41:27.701808 1096371 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.456802ms
	I0603 12:41:27.701867 1096371 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 12:41:27.701916 1096371 kubeadm.go:309] [api-check] The API server is healthy after 6.002004686s
	I0603 12:41:27.702014 1096371 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 12:41:27.702116 1096371 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 12:41:27.702171 1096371 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 12:41:27.702358 1096371 kubeadm.go:309] [mark-control-plane] Marking the node ha-220492 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 12:41:27.702418 1096371 kubeadm.go:309] [bootstrap-token] Using token: udpj77.zgtf6r34m22e6dpn
	I0603 12:41:27.703655 1096371 out.go:204]   - Configuring RBAC rules ...
	I0603 12:41:27.703765 1096371 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 12:41:27.703849 1096371 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 12:41:27.704016 1096371 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 12:41:27.704182 1096371 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 12:41:27.704344 1096371 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 12:41:27.704436 1096371 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 12:41:27.704562 1096371 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 12:41:27.704625 1096371 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 12:41:27.704682 1096371 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 12:41:27.704689 1096371 kubeadm.go:309] 
	I0603 12:41:27.704740 1096371 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 12:41:27.704746 1096371 kubeadm.go:309] 
	I0603 12:41:27.704816 1096371 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 12:41:27.704822 1096371 kubeadm.go:309] 
	I0603 12:41:27.704852 1096371 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 12:41:27.704908 1096371 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 12:41:27.704955 1096371 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 12:41:27.704962 1096371 kubeadm.go:309] 
	I0603 12:41:27.705011 1096371 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 12:41:27.705018 1096371 kubeadm.go:309] 
	I0603 12:41:27.705056 1096371 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 12:41:27.705062 1096371 kubeadm.go:309] 
	I0603 12:41:27.705103 1096371 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 12:41:27.705165 1096371 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 12:41:27.705242 1096371 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 12:41:27.705255 1096371 kubeadm.go:309] 
	I0603 12:41:27.705357 1096371 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 12:41:27.705446 1096371 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 12:41:27.705454 1096371 kubeadm.go:309] 
	I0603 12:41:27.705523 1096371 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token udpj77.zgtf6r34m22e6dpn \
	I0603 12:41:27.705611 1096371 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 12:41:27.705635 1096371 kubeadm.go:309] 	--control-plane 
	I0603 12:41:27.705641 1096371 kubeadm.go:309] 
	I0603 12:41:27.705708 1096371 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 12:41:27.705714 1096371 kubeadm.go:309] 
	I0603 12:41:27.705779 1096371 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token udpj77.zgtf6r34m22e6dpn \
	I0603 12:41:27.705879 1096371 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
	I0603 12:41:27.705892 1096371 cni.go:84] Creating CNI manager for ""
	I0603 12:41:27.705897 1096371 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0603 12:41:27.707387 1096371 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0603 12:41:27.708551 1096371 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0603 12:41:27.713890 1096371 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0603 12:41:27.713909 1096371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0603 12:41:27.734853 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0603 12:41:28.054420 1096371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 12:41:28.054507 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:28.054538 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220492 minikube.k8s.io/updated_at=2024_06_03T12_41_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-220492 minikube.k8s.io/primary=true
	I0603 12:41:28.086452 1096371 ops.go:34] apiserver oom_adj: -16
	I0603 12:41:28.182311 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:28.682758 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:29.183361 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:29.682627 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:30.182859 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:30.683228 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:31.183069 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:31.682992 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:32.182664 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:32.683313 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:33.182544 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:33.683120 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:34.182933 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:34.682944 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:35.183074 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:35.682695 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:36.182914 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:36.683222 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:37.182640 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:37.682507 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:38.182629 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:38.682400 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:39.182712 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:39.682329 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:40.183320 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:40.683259 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 12:41:40.789456 1096371 kubeadm.go:1107] duration metric: took 12.735015654s to wait for elevateKubeSystemPrivileges
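The repeated `kubectl get sa default` lines above are minikube polling every ~500ms until the "default" ServiceAccount exists before it grants elevated privileges to kube-system. A minimal sketch of the same wait done with client-go is below; it is illustrative only (not minikube's actual code), and the kubeconfig path is just the one visible in the log.

```go
// Illustrative only: poll until the "default" ServiceAccount exists in the
// "default" namespace, roughly what the repeated "kubectl get sa default"
// calls above are waiting for. The kubeconfig path is an example.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			// Not found yet is not an error for the poll; just try again.
			_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is present")
}
```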
	W0603 12:41:40.789505 1096371 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 12:41:40.789516 1096371 kubeadm.go:393] duration metric: took 24.045716128s to StartCluster
	I0603 12:41:40.789542 1096371 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:40.789635 1096371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:41:40.790400 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:41:40.790632 1096371 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:41:40.790661 1096371 start.go:240] waiting for startup goroutines ...
	I0603 12:41:40.790634 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0603 12:41:40.790658 1096371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 12:41:40.790725 1096371 addons.go:69] Setting storage-provisioner=true in profile "ha-220492"
	I0603 12:41:40.790751 1096371 addons.go:234] Setting addon storage-provisioner=true in "ha-220492"
	I0603 12:41:40.790755 1096371 addons.go:69] Setting default-storageclass=true in profile "ha-220492"
	I0603 12:41:40.790795 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:41:40.790796 1096371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-220492"
	I0603 12:41:40.790857 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:41:40.791200 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.791233 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.791244 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.791268 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.806968 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0603 12:41:40.807010 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
	I0603 12:41:40.807503 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.807550 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.808030 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.808051 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.808176 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.808201 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.808396 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.808584 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.808774 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:41:40.808989 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.809018 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.811030 1096371 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:41:40.811405 1096371 kapi.go:59] client config for ha-220492: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt", KeyFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key", CAFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0603 12:41:40.812024 1096371 cert_rotation.go:137] Starting client certificate rotation controller
	I0603 12:41:40.812284 1096371 addons.go:234] Setting addon default-storageclass=true in "ha-220492"
	I0603 12:41:40.812334 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:41:40.812705 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.812742 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.824612 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0603 12:41:40.825060 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.825616 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.825641 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.825959 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.826180 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:41:40.827899 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:40.830068 1096371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 12:41:40.828430 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33655
	I0603 12:41:40.831614 1096371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:41:40.831636 1096371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 12:41:40.831656 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:40.831834 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.832331 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.832352 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.832736 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.833309 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:40.833358 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:40.834604 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:40.835055 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:40.835084 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:40.835215 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:40.835408 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:40.835571 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:40.835746 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:40.848804 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0603 12:41:40.849189 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:40.849622 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:40.849642 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:40.849982 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:40.850164 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:41:40.851627 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:41:40.851798 1096371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 12:41:40.851814 1096371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 12:41:40.851830 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:41:40.854207 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:40.854548 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:41:40.854577 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:41:40.854763 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:41:40.854937 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:41:40.855133 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:41:40.855293 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:41:41.006436 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0603 12:41:41.055489 1096371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 12:41:41.101156 1096371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 12:41:41.501620 1096371 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0603 12:41:41.501705 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.501728 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.502034 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.502051 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.502059 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.502067 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.502340 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.502361 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.502428 1096371 main.go:141] libmachine: (ha-220492) DBG | Closing plugin on server side
	I0603 12:41:41.502489 1096371 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0603 12:41:41.502501 1096371 round_trippers.go:469] Request Headers:
	I0603 12:41:41.502511 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:41:41.502523 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:41:41.510300 1096371 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 12:41:41.511148 1096371 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0603 12:41:41.511171 1096371 round_trippers.go:469] Request Headers:
	I0603 12:41:41.511181 1096371 round_trippers.go:473]     Content-Type: application/json
	I0603 12:41:41.511190 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:41:41.511194 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:41:41.519775 1096371 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 12:41:41.519949 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.519964 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.520298 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.520319 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.807170 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.807203 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.807610 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.807639 1096371 main.go:141] libmachine: (ha-220492) DBG | Closing plugin on server side
	I0603 12:41:41.807646 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.807663 1096371 main.go:141] libmachine: Making call to close driver server
	I0603 12:41:41.807672 1096371 main.go:141] libmachine: (ha-220492) Calling .Close
	I0603 12:41:41.807941 1096371 main.go:141] libmachine: Successfully made call to close driver server
	I0603 12:41:41.807977 1096371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 12:41:41.809742 1096371 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0603 12:41:41.811132 1096371 addons.go:510] duration metric: took 1.020469972s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0603 12:41:41.811167 1096371 start.go:245] waiting for cluster config update ...
	I0603 12:41:41.811181 1096371 start.go:254] writing updated cluster config ...
	I0603 12:41:41.812981 1096371 out.go:177] 
	I0603 12:41:41.814660 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:41:41.814750 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:41:41.816223 1096371 out.go:177] * Starting "ha-220492-m02" control-plane node in "ha-220492" cluster
	I0603 12:41:41.817447 1096371 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:41:41.817471 1096371 cache.go:56] Caching tarball of preloaded images
	I0603 12:41:41.817575 1096371 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:41:41.817588 1096371 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:41:41.817673 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:41:41.817850 1096371 start.go:360] acquireMachinesLock for ha-220492-m02: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:41:41.817914 1096371 start.go:364] duration metric: took 42.326µs to acquireMachinesLock for "ha-220492-m02"
	I0603 12:41:41.817939 1096371 start.go:93] Provisioning new machine with config: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:41:41.818039 1096371 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0603 12:41:41.819647 1096371 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 12:41:41.819745 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:41:41.819777 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:41:41.834827 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0603 12:41:41.835244 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:41:41.835692 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:41:41.835712 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:41:41.836046 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:41:41.836236 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetMachineName
	I0603 12:41:41.836356 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:41:41.836494 1096371 start.go:159] libmachine.API.Create for "ha-220492" (driver="kvm2")
	I0603 12:41:41.836512 1096371 client.go:168] LocalClient.Create starting
	I0603 12:41:41.836537 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 12:41:41.836569 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:41:41.836583 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:41:41.836644 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 12:41:41.836663 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:41:41.836673 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:41:41.836692 1096371 main.go:141] libmachine: Running pre-create checks...
	I0603 12:41:41.836700 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .PreCreateCheck
	I0603 12:41:41.836898 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetConfigRaw
	I0603 12:41:41.837273 1096371 main.go:141] libmachine: Creating machine...
	I0603 12:41:41.837284 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .Create
	I0603 12:41:41.837449 1096371 main.go:141] libmachine: (ha-220492-m02) Creating KVM machine...
	I0603 12:41:41.838645 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found existing default KVM network
	I0603 12:41:41.838775 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found existing private KVM network mk-ha-220492
	I0603 12:41:41.838942 1096371 main.go:141] libmachine: (ha-220492-m02) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02 ...
	I0603 12:41:41.838969 1096371 main.go:141] libmachine: (ha-220492-m02) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:41:41.839005 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:41.838909 1096793 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:41:41.839091 1096371 main.go:141] libmachine: (ha-220492-m02) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:41:42.098476 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:42.098332 1096793 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa...
	I0603 12:41:42.164226 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:42.164093 1096793 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/ha-220492-m02.rawdisk...
	I0603 12:41:42.164262 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Writing magic tar header
	I0603 12:41:42.164286 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Writing SSH key tar header
	I0603 12:41:42.164295 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:42.164206 1096793 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02 ...
	I0603 12:41:42.164306 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02
	I0603 12:41:42.164379 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 12:41:42.164412 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02 (perms=drwx------)
	I0603 12:41:42.164422 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:41:42.164433 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 12:41:42.164447 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 12:41:42.164453 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home/jenkins
	I0603 12:41:42.164459 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Checking permissions on dir: /home
	I0603 12:41:42.164466 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Skipping /home - not owner
	I0603 12:41:42.164510 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 12:41:42.164547 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 12:41:42.164561 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 12:41:42.164573 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 12:41:42.164587 1096371 main.go:141] libmachine: (ha-220492-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 12:41:42.164598 1096371 main.go:141] libmachine: (ha-220492-m02) Creating domain...
	I0603 12:41:42.165519 1096371 main.go:141] libmachine: (ha-220492-m02) define libvirt domain using xml: 
	I0603 12:41:42.165540 1096371 main.go:141] libmachine: (ha-220492-m02) <domain type='kvm'>
	I0603 12:41:42.165550 1096371 main.go:141] libmachine: (ha-220492-m02)   <name>ha-220492-m02</name>
	I0603 12:41:42.165557 1096371 main.go:141] libmachine: (ha-220492-m02)   <memory unit='MiB'>2200</memory>
	I0603 12:41:42.165568 1096371 main.go:141] libmachine: (ha-220492-m02)   <vcpu>2</vcpu>
	I0603 12:41:42.165573 1096371 main.go:141] libmachine: (ha-220492-m02)   <features>
	I0603 12:41:42.165581 1096371 main.go:141] libmachine: (ha-220492-m02)     <acpi/>
	I0603 12:41:42.165587 1096371 main.go:141] libmachine: (ha-220492-m02)     <apic/>
	I0603 12:41:42.165595 1096371 main.go:141] libmachine: (ha-220492-m02)     <pae/>
	I0603 12:41:42.165605 1096371 main.go:141] libmachine: (ha-220492-m02)     
	I0603 12:41:42.165612 1096371 main.go:141] libmachine: (ha-220492-m02)   </features>
	I0603 12:41:42.165621 1096371 main.go:141] libmachine: (ha-220492-m02)   <cpu mode='host-passthrough'>
	I0603 12:41:42.165648 1096371 main.go:141] libmachine: (ha-220492-m02)   
	I0603 12:41:42.165671 1096371 main.go:141] libmachine: (ha-220492-m02)   </cpu>
	I0603 12:41:42.165687 1096371 main.go:141] libmachine: (ha-220492-m02)   <os>
	I0603 12:41:42.165699 1096371 main.go:141] libmachine: (ha-220492-m02)     <type>hvm</type>
	I0603 12:41:42.165709 1096371 main.go:141] libmachine: (ha-220492-m02)     <boot dev='cdrom'/>
	I0603 12:41:42.165719 1096371 main.go:141] libmachine: (ha-220492-m02)     <boot dev='hd'/>
	I0603 12:41:42.165731 1096371 main.go:141] libmachine: (ha-220492-m02)     <bootmenu enable='no'/>
	I0603 12:41:42.165741 1096371 main.go:141] libmachine: (ha-220492-m02)   </os>
	I0603 12:41:42.165751 1096371 main.go:141] libmachine: (ha-220492-m02)   <devices>
	I0603 12:41:42.165764 1096371 main.go:141] libmachine: (ha-220492-m02)     <disk type='file' device='cdrom'>
	I0603 12:41:42.165782 1096371 main.go:141] libmachine: (ha-220492-m02)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/boot2docker.iso'/>
	I0603 12:41:42.165793 1096371 main.go:141] libmachine: (ha-220492-m02)       <target dev='hdc' bus='scsi'/>
	I0603 12:41:42.165801 1096371 main.go:141] libmachine: (ha-220492-m02)       <readonly/>
	I0603 12:41:42.165818 1096371 main.go:141] libmachine: (ha-220492-m02)     </disk>
	I0603 12:41:42.165829 1096371 main.go:141] libmachine: (ha-220492-m02)     <disk type='file' device='disk'>
	I0603 12:41:42.165840 1096371 main.go:141] libmachine: (ha-220492-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 12:41:42.165857 1096371 main.go:141] libmachine: (ha-220492-m02)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/ha-220492-m02.rawdisk'/>
	I0603 12:41:42.165869 1096371 main.go:141] libmachine: (ha-220492-m02)       <target dev='hda' bus='virtio'/>
	I0603 12:41:42.165877 1096371 main.go:141] libmachine: (ha-220492-m02)     </disk>
	I0603 12:41:42.165892 1096371 main.go:141] libmachine: (ha-220492-m02)     <interface type='network'>
	I0603 12:41:42.165904 1096371 main.go:141] libmachine: (ha-220492-m02)       <source network='mk-ha-220492'/>
	I0603 12:41:42.165913 1096371 main.go:141] libmachine: (ha-220492-m02)       <model type='virtio'/>
	I0603 12:41:42.165919 1096371 main.go:141] libmachine: (ha-220492-m02)     </interface>
	I0603 12:41:42.165930 1096371 main.go:141] libmachine: (ha-220492-m02)     <interface type='network'>
	I0603 12:41:42.165944 1096371 main.go:141] libmachine: (ha-220492-m02)       <source network='default'/>
	I0603 12:41:42.165958 1096371 main.go:141] libmachine: (ha-220492-m02)       <model type='virtio'/>
	I0603 12:41:42.165970 1096371 main.go:141] libmachine: (ha-220492-m02)     </interface>
	I0603 12:41:42.165979 1096371 main.go:141] libmachine: (ha-220492-m02)     <serial type='pty'>
	I0603 12:41:42.165991 1096371 main.go:141] libmachine: (ha-220492-m02)       <target port='0'/>
	I0603 12:41:42.165999 1096371 main.go:141] libmachine: (ha-220492-m02)     </serial>
	I0603 12:41:42.166005 1096371 main.go:141] libmachine: (ha-220492-m02)     <console type='pty'>
	I0603 12:41:42.166016 1096371 main.go:141] libmachine: (ha-220492-m02)       <target type='serial' port='0'/>
	I0603 12:41:42.166039 1096371 main.go:141] libmachine: (ha-220492-m02)     </console>
	I0603 12:41:42.166057 1096371 main.go:141] libmachine: (ha-220492-m02)     <rng model='virtio'>
	I0603 12:41:42.166064 1096371 main.go:141] libmachine: (ha-220492-m02)       <backend model='random'>/dev/random</backend>
	I0603 12:41:42.166071 1096371 main.go:141] libmachine: (ha-220492-m02)     </rng>
	I0603 12:41:42.166077 1096371 main.go:141] libmachine: (ha-220492-m02)     
	I0603 12:41:42.166081 1096371 main.go:141] libmachine: (ha-220492-m02)     
	I0603 12:41:42.166087 1096371 main.go:141] libmachine: (ha-220492-m02)   </devices>
	I0603 12:41:42.166093 1096371 main.go:141] libmachine: (ha-220492-m02) </domain>
	I0603 12:41:42.166100 1096371 main.go:141] libmachine: (ha-220492-m02) 
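The XML logged above is the kvm2 driver's libvirt domain definition for ha-220492-m02 (ISO as a CD-ROM, the raw disk, and NICs on the mk-ha-220492 and default networks). A hedged sketch of defining and booting a domain from such XML with the Go libvirt bindings follows; the import path, file name, and error handling are assumptions for the example, not the driver's actual implementation.

```go
// Minimal sketch: persistently define a libvirt domain from XML like the one
// above and start it ("Creating domain..." in the log). Paths and import path
// are assumptions for illustration.
package main

import (
	"fmt"
	"os"

	libvirt "github.com/libvirt/libvirt-go" // newer projects may use libvirt.org/go/libvirt
)

func main() {
	xml, err := os.ReadFile("ha-220492-m02.xml") // hypothetical file holding the domain XML
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the domain persistently, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}
```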
	I0603 12:41:42.172501 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:3c:64:73 in network default
	I0603 12:41:42.173058 1096371 main.go:141] libmachine: (ha-220492-m02) Ensuring networks are active...
	I0603 12:41:42.173071 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:42.173840 1096371 main.go:141] libmachine: (ha-220492-m02) Ensuring network default is active
	I0603 12:41:42.174161 1096371 main.go:141] libmachine: (ha-220492-m02) Ensuring network mk-ha-220492 is active
	I0603 12:41:42.174575 1096371 main.go:141] libmachine: (ha-220492-m02) Getting domain xml...
	I0603 12:41:42.175289 1096371 main.go:141] libmachine: (ha-220492-m02) Creating domain...
	I0603 12:41:43.408023 1096371 main.go:141] libmachine: (ha-220492-m02) Waiting to get IP...
	I0603 12:41:43.408787 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:43.409234 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:43.409266 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:43.409198 1096793 retry.go:31] will retry after 231.363398ms: waiting for machine to come up
	I0603 12:41:43.643040 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:43.643639 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:43.643666 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:43.643579 1096793 retry.go:31] will retry after 353.063611ms: waiting for machine to come up
	I0603 12:41:43.998171 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:43.998655 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:43.998688 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:43.998593 1096793 retry.go:31] will retry after 405.64874ms: waiting for machine to come up
	I0603 12:41:44.406228 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:44.406687 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:44.406712 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:44.406641 1096793 retry.go:31] will retry after 471.518099ms: waiting for machine to come up
	I0603 12:41:44.879308 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:44.879787 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:44.879818 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:44.879742 1096793 retry.go:31] will retry after 670.162296ms: waiting for machine to come up
	I0603 12:41:45.551947 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:45.552455 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:45.552508 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:45.552449 1096793 retry.go:31] will retry after 784.973205ms: waiting for machine to come up
	I0603 12:41:46.339394 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:46.339836 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:46.339869 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:46.339773 1096793 retry.go:31] will retry after 946.869881ms: waiting for machine to come up
	I0603 12:41:47.288357 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:47.288753 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:47.288780 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:47.288698 1096793 retry.go:31] will retry after 1.43924214s: waiting for machine to come up
	I0603 12:41:48.729639 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:48.730058 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:48.730084 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:48.730007 1096793 retry.go:31] will retry after 1.520365565s: waiting for machine to come up
	I0603 12:41:50.252526 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:50.252955 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:50.252979 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:50.252908 1096793 retry.go:31] will retry after 1.523540957s: waiting for machine to come up
	I0603 12:41:51.778661 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:51.779119 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:51.779143 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:51.779069 1096793 retry.go:31] will retry after 2.17843585s: waiting for machine to come up
	I0603 12:41:53.959571 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:53.960016 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:53.960046 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:53.959992 1096793 retry.go:31] will retry after 3.266960434s: waiting for machine to come up
	I0603 12:41:57.228322 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:41:57.228849 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:41:57.228872 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:41:57.228794 1096793 retry.go:31] will retry after 3.22328969s: waiting for machine to come up
	I0603 12:42:00.454701 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:00.455157 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find current IP address of domain ha-220492-m02 in network mk-ha-220492
	I0603 12:42:00.455195 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | I0603 12:42:00.455113 1096793 retry.go:31] will retry after 4.667919915s: waiting for machine to come up
	I0603 12:42:05.126452 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.126859 1096371 main.go:141] libmachine: (ha-220492-m02) Found IP for machine: 192.168.39.106
	I0603 12:42:05.126887 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has current primary IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.126895 1096371 main.go:141] libmachine: (ha-220492-m02) Reserving static IP address...
	I0603 12:42:05.127264 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | unable to find host DHCP lease matching {name: "ha-220492-m02", mac: "52:54:00:5d:56:2b", ip: "192.168.39.106"} in network mk-ha-220492
	I0603 12:42:05.200106 1096371 main.go:141] libmachine: (ha-220492-m02) Reserved static IP address: 192.168.39.106
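The "will retry after ..." lines above show the driver polling the network's DHCP leases for the new MAC address with growing, jittered delays until an IP appears. A generic sketch of that retry shape is below; `lookupIP` is a placeholder (the real code reads libvirt DHCP leases), and the intervals only loosely match the log.

```go
// Generic sketch of the "Waiting to get IP" retry loop above: poll a lookup
// function with jittered, exponentially growing delays until it returns an IP
// or the timeout expires. lookupIP stands in for "find a DHCP lease for this MAC".
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Jittered exponential backoff, capped at a few seconds.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		// Placeholder lease lookup: pretend the lease shows up after ~3s.
		if time.Since(start) > 3*time.Second {
			return "192.168.39.106", nil
		}
		return "", errors.New("no lease yet")
	}, 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("got IP:", ip)
}
```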
	I0603 12:42:05.200140 1096371 main.go:141] libmachine: (ha-220492-m02) Waiting for SSH to be available...
	I0603 12:42:05.200150 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Getting to WaitForSSH function...
	I0603 12:42:05.202975 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.203470 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.203507 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.203695 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Using SSH client type: external
	I0603 12:42:05.203727 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa (-rw-------)
	I0603 12:42:05.203757 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:42:05.203775 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | About to run SSH command:
	I0603 12:42:05.203792 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | exit 0
	I0603 12:42:05.329602 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | SSH cmd err, output: <nil>: 
	I0603 12:42:05.329923 1096371 main.go:141] libmachine: (ha-220492-m02) KVM machine creation complete!
	I0603 12:42:05.330264 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetConfigRaw
	I0603 12:42:05.330860 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:05.331081 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:05.331272 1096371 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 12:42:05.331290 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:42:05.332521 1096371 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 12:42:05.332541 1096371 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 12:42:05.332549 1096371 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 12:42:05.332556 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.335101 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.335490 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.335519 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.335663 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.335877 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.336039 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.336201 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.336355 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:05.336562 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:05.336573 1096371 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 12:42:05.444957 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:42:05.444990 1096371 main.go:141] libmachine: Detecting the provisioner...
	I0603 12:42:05.445000 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.448052 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.448397 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.448425 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.448648 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.448850 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.448996 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.449126 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.449297 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:05.449483 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:05.449494 1096371 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 12:42:05.558489 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 12:42:05.558575 1096371 main.go:141] libmachine: found compatible host: buildroot
	I0603 12:42:05.558582 1096371 main.go:141] libmachine: Provisioning with buildroot...
	I0603 12:42:05.558591 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetMachineName
	I0603 12:42:05.558853 1096371 buildroot.go:166] provisioning hostname "ha-220492-m02"
	I0603 12:42:05.558884 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetMachineName
	I0603 12:42:05.559079 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.561873 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.562264 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.562292 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.562440 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.562650 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.562804 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.562961 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.563147 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:05.563333 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:05.563349 1096371 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220492-m02 && echo "ha-220492-m02" | sudo tee /etc/hostname
	I0603 12:42:05.688813 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492-m02
	
	I0603 12:42:05.688851 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.691627 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.692015 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.692047 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.692236 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.692475 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.692661 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.692850 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.693027 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:05.693215 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:05.693238 1096371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220492-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220492-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220492-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:42:05.810157 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:42:05.810197 1096371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:42:05.810216 1096371 buildroot.go:174] setting up certificates
	I0603 12:42:05.810227 1096371 provision.go:84] configureAuth start
	I0603 12:42:05.810240 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetMachineName
	I0603 12:42:05.810528 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:42:05.813279 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.813619 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.813647 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.813833 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.815843 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.816159 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.816204 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.816330 1096371 provision.go:143] copyHostCerts
	I0603 12:42:05.816374 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:42:05.816415 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 12:42:05.816428 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:42:05.816508 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:42:05.816602 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:42:05.816626 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 12:42:05.816634 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:42:05.816674 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:42:05.816735 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:42:05.816759 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 12:42:05.816768 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:42:05.816800 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:42:05.816866 1096371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.ha-220492-m02 san=[127.0.0.1 192.168.39.106 ha-220492-m02 localhost minikube]
	I0603 12:42:05.949501 1096371 provision.go:177] copyRemoteCerts
	I0603 12:42:05.949574 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:42:05.949609 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:05.952377 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.952708 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:05.952742 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:05.952896 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:05.953080 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:05.953262 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:05.953400 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:42:06.039497 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 12:42:06.039603 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:42:06.065277 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 12:42:06.065349 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 12:42:06.091385 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 12:42:06.091455 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:42:06.122390 1096371 provision.go:87] duration metric: took 312.14592ms to configureAuth
	I0603 12:42:06.122424 1096371 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:42:06.122671 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:42:06.122781 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.125780 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.126255 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.126289 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.126374 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.126579 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.126777 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.126945 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.127161 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:06.127366 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:06.127385 1096371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:42:06.414766 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:42:06.414806 1096371 main.go:141] libmachine: Checking connection to Docker...
	I0603 12:42:06.414828 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetURL
	I0603 12:42:06.416212 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | Using libvirt version 6000000
	I0603 12:42:06.418443 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.418867 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.418889 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.419077 1096371 main.go:141] libmachine: Docker is up and running!
	I0603 12:42:06.419093 1096371 main.go:141] libmachine: Reticulating splines...
	I0603 12:42:06.419101 1096371 client.go:171] duration metric: took 24.5825817s to LocalClient.Create
	I0603 12:42:06.419122 1096371 start.go:167] duration metric: took 24.582629095s to libmachine.API.Create "ha-220492"
	I0603 12:42:06.419130 1096371 start.go:293] postStartSetup for "ha-220492-m02" (driver="kvm2")
	I0603 12:42:06.419141 1096371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:42:06.419158 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.419416 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:42:06.419445 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.421613 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.421902 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.421930 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.422008 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.422169 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.422328 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.422486 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:42:06.507721 1096371 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:42:06.512254 1096371 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:42:06.512286 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:42:06.512367 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:42:06.512464 1096371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 12:42:06.512479 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 12:42:06.512597 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:42:06.522036 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:42:06.546236 1096371 start.go:296] duration metric: took 127.089603ms for postStartSetup
	I0603 12:42:06.546292 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetConfigRaw
	I0603 12:42:06.546939 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:42:06.549490 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.549831 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.549853 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.550152 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:42:06.550339 1096371 start.go:128] duration metric: took 24.732287104s to createHost
	I0603 12:42:06.550363 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.552701 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.552989 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.553012 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.553239 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.553450 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.553592 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.553727 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.553864 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:42:06.554029 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I0603 12:42:06.554040 1096371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:42:06.666580 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418526.642377736
	
	I0603 12:42:06.666608 1096371 fix.go:216] guest clock: 1717418526.642377736
	I0603 12:42:06.666618 1096371 fix.go:229] Guest: 2024-06-03 12:42:06.642377736 +0000 UTC Remote: 2024-06-03 12:42:06.550350299 +0000 UTC m=+81.432920285 (delta=92.027437ms)
	I0603 12:42:06.666639 1096371 fix.go:200] guest clock delta is within tolerance: 92.027437ms
	I0603 12:42:06.666646 1096371 start.go:83] releasing machines lock for "ha-220492-m02", held for 24.848719588s
	I0603 12:42:06.666672 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.666965 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:42:06.670944 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.671366 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.671395 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.673668 1096371 out.go:177] * Found network options:
	I0603 12:42:06.674997 1096371 out.go:177]   - NO_PROXY=192.168.39.6
	W0603 12:42:06.676110 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 12:42:06.676136 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.676719 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.676925 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:42:06.677049 1096371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0603 12:42:06.677092 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 12:42:06.677101 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.677171 1096371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:42:06.677227 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:42:06.679770 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.679946 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.680124 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.680150 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.680321 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.680406 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:06.680437 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:06.680508 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.680602 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:42:06.680716 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:42:06.680715 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.680884 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:42:06.680881 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:42:06.681009 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:42:06.928955 1096371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:42:06.935185 1096371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:42:06.935268 1096371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:42:06.950886 1096371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:42:06.950909 1096371 start.go:494] detecting cgroup driver to use...
	I0603 12:42:06.950967 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:42:06.968906 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:42:06.984061 1096371 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:42:06.984127 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:42:06.997903 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:42:07.011677 1096371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:42:07.132411 1096371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:42:07.272200 1096371 docker.go:233] disabling docker service ...
	I0603 12:42:07.272297 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:42:07.289229 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:42:07.303039 1096371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:42:07.433572 1096371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:42:07.546622 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:42:07.560394 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:42:07.578937 1096371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:42:07.579006 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.589108 1096371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:42:07.589166 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.600314 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.610061 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.619841 1096371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:42:07.629789 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.639901 1096371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.656766 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:42:07.667492 1096371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:42:07.677190 1096371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:42:07.677233 1096371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:42:07.691268 1096371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:42:07.700972 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:42:07.826553 1096371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:42:07.961045 1096371 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:42:07.961117 1096371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:42:07.965738 1096371 start.go:562] Will wait 60s for crictl version
	I0603 12:42:07.965794 1096371 ssh_runner.go:195] Run: which crictl
	I0603 12:42:07.969909 1096371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:42:08.019940 1096371 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:42:08.020041 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:42:08.048407 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:42:08.078649 1096371 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:42:08.079999 1096371 out.go:177]   - env NO_PROXY=192.168.39.6
	I0603 12:42:08.081200 1096371 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:42:08.083757 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:08.084130 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:41:56 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:42:08.084152 1096371 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:42:08.084412 1096371 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:42:08.088707 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:42:08.101315 1096371 mustload.go:65] Loading cluster: ha-220492
	I0603 12:42:08.101560 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:42:08.101805 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:42:08.101873 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:42:08.116945 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0603 12:42:08.117382 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:42:08.117914 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:42:08.117939 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:42:08.118276 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:42:08.118501 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:42:08.120058 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:42:08.120367 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:42:08.120392 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:42:08.135995 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44789
	I0603 12:42:08.136369 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:42:08.136845 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:42:08.136870 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:42:08.137213 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:42:08.137417 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:42:08.137610 1096371 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492 for IP: 192.168.39.106
	I0603 12:42:08.137626 1096371 certs.go:194] generating shared ca certs ...
	I0603 12:42:08.137640 1096371 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:42:08.137760 1096371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:42:08.137795 1096371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:42:08.137807 1096371 certs.go:256] generating profile certs ...
	I0603 12:42:08.137872 1096371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key
	I0603 12:42:08.137896 1096371 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.11e6568a
	I0603 12:42:08.137908 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.11e6568a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.106 192.168.39.254]
	I0603 12:42:08.319810 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.11e6568a ...
	I0603 12:42:08.319845 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.11e6568a: {Name:mkd21a2eba7380f69e7d36df8d1f2bd501844ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:42:08.320044 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.11e6568a ...
	I0603 12:42:08.320064 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.11e6568a: {Name:mkfc0c55f94b5f637b57a4905b366f7655de4d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:42:08.320172 1096371 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.11e6568a -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt
	I0603 12:42:08.320343 1096371 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.11e6568a -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key
	I0603 12:42:08.320589 1096371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key
	I0603 12:42:08.320622 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 12:42:08.320663 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 12:42:08.320685 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 12:42:08.320703 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 12:42:08.320721 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 12:42:08.320849 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 12:42:08.320874 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 12:42:08.320893 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 12:42:08.320984 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 12:42:08.321032 1096371 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 12:42:08.321045 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:42:08.321076 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:42:08.321109 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:42:08.321140 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 12:42:08.321226 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:42:08.321270 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 12:42:08.321291 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 12:42:08.321308 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:42:08.321358 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:42:08.324577 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:42:08.325002 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:42:08.325024 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:42:08.325209 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:42:08.325476 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:42:08.325663 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:42:08.325949 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:42:08.401753 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0603 12:42:08.407623 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 12:42:08.419823 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0603 12:42:08.424030 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0603 12:42:08.435432 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 12:42:08.439640 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 12:42:08.450791 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0603 12:42:08.459749 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0603 12:42:08.470705 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0603 12:42:08.474974 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 12:42:08.485132 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0603 12:42:08.489458 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0603 12:42:08.500170 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:42:08.528731 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:42:08.552333 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:42:08.576303 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:42:08.599871 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 12:42:08.625361 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 12:42:08.651137 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:42:08.674060 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:42:08.696650 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 12:42:08.719885 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 12:42:08.742499 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:42:08.765530 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 12:42:08.782242 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0603 12:42:08.798849 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 12:42:08.815252 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0603 12:42:08.832083 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 12:42:08.848487 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0603 12:42:08.865730 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0603 12:42:08.882211 1096371 ssh_runner.go:195] Run: openssl version
	I0603 12:42:08.888010 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 12:42:08.898794 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 12:42:08.903261 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 12:42:08.903330 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 12:42:08.909128 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:42:08.919702 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:42:08.929911 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:42:08.934373 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:42:08.934426 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:42:08.939885 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:42:08.950480 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 12:42:08.961118 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 12:42:08.965509 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 12:42:08.965557 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 12:42:08.971051 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 12:42:08.981382 1096371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:42:08.985271 1096371 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 12:42:08.985331 1096371 kubeadm.go:928] updating node {m02 192.168.39.106 8443 v1.30.1 crio true true} ...
	I0603 12:42:08.985451 1096371 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220492-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:42:08.985480 1096371 kube-vip.go:115] generating kube-vip config ...
	I0603 12:42:08.985523 1096371 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 12:42:09.000417 1096371 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 12:42:09.000502 1096371 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 12:42:09.000556 1096371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:42:09.010330 1096371 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 12:42:09.010391 1096371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 12:42:09.020126 1096371 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 12:42:09.020155 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 12:42:09.020164 1096371 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0603 12:42:09.020177 1096371 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0603 12:42:09.020235 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 12:42:09.024594 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 12:42:09.024625 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 12:42:09.662346 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 12:42:09.662438 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 12:42:09.667373 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 12:42:09.667408 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 12:42:14.371306 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:42:14.386191 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 12:42:14.386288 1096371 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 12:42:14.390761 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 12:42:14.390790 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
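
The stat/scp sequence above is the "Initiating transfer..." path: kubectl, kubeadm and kubelet are each stat'ed on the node and copied from the local cache only when the stat fails. A hedged Go sketch of that check-then-copy pattern follows; the Runner interface and ensureBinaries helper are hypothetical stand-ins, not minikube's ssh_runner API.

// Hedged sketch of the check-then-copy flow logged above. Runner is a
// hypothetical stand-in for minikube's ssh_runner: it runs a command on the
// target node and copies local files to it.
package provision

import (
	"fmt"
	"path/filepath"
)

type Runner interface {
	Run(cmd string) error                    // run a command on the node over SSH
	Copy(localPath, remotePath string) error // scp a local file to the node
}

// ensureBinaries stats each binary on the node and only transfers it from
// the local cache when the stat fails (the "Initiating transfer..." path).
func ensureBinaries(r Runner, version, cacheDir string) error {
	remoteDir := filepath.Join("/var/lib/minikube/binaries", version)
	if err := r.Run("sudo mkdir -p " + remoteDir); err != nil {
		return err
	}
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		remote := filepath.Join(remoteDir, bin)
		if err := r.Run("stat " + remote); err == nil {
			continue // already present, nothing to transfer
		}
		local := filepath.Join(cacheDir, version, bin)
		if err := r.Copy(local, remote); err != nil {
			return fmt.Errorf("transferring %s: %w", bin, err)
		}
	}
	return nil
}
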
	I0603 12:42:14.791630 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 12:42:14.801234 1096371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 12:42:14.818132 1096371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:42:14.834413 1096371 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 12:42:14.850605 1096371 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 12:42:14.854560 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:42:14.866382 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:42:14.992955 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:42:15.011459 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:42:15.011896 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:42:15.011955 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:42:15.027141 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I0603 12:42:15.027605 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:42:15.028094 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:42:15.028116 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:42:15.028445 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:42:15.028698 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:42:15.028862 1096371 start.go:316] joinCluster: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cluster
Name:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:42:15.028982 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 12:42:15.029010 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:42:15.031978 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:42:15.032398 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:42:15.032428 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:42:15.032568 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:42:15.032733 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:42:15.032882 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:42:15.033039 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:42:15.202036 1096371 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:42:15.202116 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token unnm3w.kbp0iaoodjba0o8t --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220492-m02 --control-plane --apiserver-advertise-address=192.168.39.106 --apiserver-bind-port=8443"
	I0603 12:42:36.806620 1096371 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token unnm3w.kbp0iaoodjba0o8t --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220492-m02 --control-plane --apiserver-advertise-address=192.168.39.106 --apiserver-bind-port=8443": (21.604476371s)
	I0603 12:42:36.806666 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 12:42:37.291847 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220492-m02 minikube.k8s.io/updated_at=2024_06_03T12_42_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-220492 minikube.k8s.io/primary=false
	I0603 12:42:37.460680 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-220492-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 12:42:37.597348 1096371 start.go:318] duration metric: took 22.568467348s to joinCluster
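
The join above follows the usual kubeadm HA flow: the primary mints a join command with "kubeadm token create --print-join-command", the new node runs it with --control-plane and its own advertise address, and minikube then labels the node and removes the control-plane NoSchedule taint so it can also run workloads. A hedged Go sketch of driving that sequence is below; runCmd and joinControlPlane are hypothetical helpers, not minikube's code.

// Hedged sketch of the control-plane join sequence logged above.
// runCmd is a hypothetical stand-in for minikube's ssh_runner: it runs a
// shell command on the named node and returns its output.
package join

import "fmt"

type runCmd func(node, cmd string) (string, error)

// joinControlPlane mints a join command on the primary, runs it with
// --control-plane on the new node, then labels the node and drops the
// control-plane NoSchedule taint.
func joinControlPlane(run runCmd, primary, newNode, nodeName, advertiseIP, kubeadm, kubectl string) error {
	// 1. Fresh join command (token + CA cert hash) from the primary.
	joinCmd, err := run(primary, fmt.Sprintf("sudo %s token create --print-join-command --ttl=0", kubeadm))
	if err != nil {
		return err
	}
	// 2. Join the new node as an additional control-plane member.
	if _, err := run(newNode, fmt.Sprintf(
		"sudo %s --control-plane --apiserver-advertise-address=%s --node-name=%s",
		joinCmd, advertiseIP, nodeName)); err != nil {
		return err
	}
	// 3. Label the node for bookkeeping, then allow workloads on it.
	if _, err := run(primary, fmt.Sprintf(
		"sudo %s label --overwrite nodes %s minikube.k8s.io/primary=false", kubectl, nodeName)); err != nil {
		return err
	}
	_, err = run(primary, fmt.Sprintf(
		"sudo %s taint nodes %s node-role.kubernetes.io/control-plane:NoSchedule-", kubectl, nodeName))
	return err
}
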
	I0603 12:42:37.597465 1096371 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:42:37.599025 1096371 out.go:177] * Verifying Kubernetes components...
	I0603 12:42:37.597819 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:42:37.600512 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:42:37.854079 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:42:37.909944 1096371 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:42:37.910331 1096371 kapi.go:59] client config for ha-220492: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt", KeyFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key", CAFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 12:42:37.910437 1096371 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.6:8443
	I0603 12:42:37.910748 1096371 node_ready.go:35] waiting up to 6m0s for node "ha-220492-m02" to be "Ready" ...
	I0603 12:42:37.910885 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:37.910899 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:37.910910 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:37.910917 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:37.921735 1096371 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0603 12:42:38.411681 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:38.411706 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:38.411714 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:38.411718 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:38.415557 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:38.911551 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:38.911575 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:38.911588 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:38.911595 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:38.914987 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:39.411024 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:39.411052 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:39.411062 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:39.411066 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:39.415523 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:39.911698 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:39.911721 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:39.911732 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:39.911737 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:39.914421 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:39.915151 1096371 node_ready.go:53] node "ha-220492-m02" has status "Ready":"False"
	I0603 12:42:40.411860 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:40.411891 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:40.411902 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:40.411909 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:40.414709 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:40.911073 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:40.911101 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:40.911113 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:40.911118 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:41.071044 1096371 round_trippers.go:574] Response Status: 200 OK in 159 milliseconds
	I0603 12:42:41.411483 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:41.411510 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:41.411522 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:41.411529 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:41.415459 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:41.911745 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:41.911768 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:41.911776 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:41.911780 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:41.915242 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:41.915823 1096371 node_ready.go:53] node "ha-220492-m02" has status "Ready":"False"
	I0603 12:42:42.411039 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:42.411060 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:42.411069 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:42.411072 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:42.414058 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:42.910998 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:42.911028 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:42.911038 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:42.911042 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:42.914644 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:43.411647 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:43.411676 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.411688 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.411696 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.415081 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:43.911305 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:43.911330 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.911338 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.911342 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.914504 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:43.915152 1096371 node_ready.go:49] node "ha-220492-m02" has status "Ready":"True"
	I0603 12:42:43.915172 1096371 node_ready.go:38] duration metric: took 6.00438251s for node "ha-220492-m02" to be "Ready" ...
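
The repeated GETs of /api/v1/nodes/ha-220492-m02 above are a readiness poll: the node is re-read roughly every half second until its Ready condition flips to True (about 6s here). A minimal client-go sketch of the same poll, assuming a placeholder kubeconfig path and the node name from the log:

// Hedged sketch of the node-readiness poll seen above: GET the node every
// 500ms until its Ready condition is True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube builds this config in-process.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodeName := "ha-220492-m02"
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished, err =", err)
}
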
	I0603 12:42:43.915182 1096371 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:42:43.915275 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:43.915284 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.915291 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.915294 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.919990 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:43.926100 1096371 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.926184 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-d2tgp
	I0603 12:42:43.926197 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.926204 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.926212 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.928912 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.929620 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:43.929636 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.929644 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.929648 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.932075 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.932651 1096371 pod_ready.go:92] pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:43.932678 1096371 pod_ready.go:81] duration metric: took 6.551142ms for pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.932690 1096371 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.932745 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-q7687
	I0603 12:42:43.932753 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.932759 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.932765 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.935012 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.935763 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:43.935780 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.935787 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.935791 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.938245 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.938797 1096371 pod_ready.go:92] pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:43.938818 1096371 pod_ready.go:81] duration metric: took 6.12059ms for pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.938831 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.938896 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492
	I0603 12:42:43.938906 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.938916 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.938926 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.947608 1096371 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 12:42:43.948266 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:43.948283 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.948290 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.948293 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.950370 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.950917 1096371 pod_ready.go:92] pod "etcd-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:43.950934 1096371 pod_ready.go:81] duration metric: took 12.093433ms for pod "etcd-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.950944 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:43.950993 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:43.951000 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.951006 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.951010 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.953171 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:43.953798 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:43.953814 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:43.953820 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:43.953824 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:43.955996 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:44.452050 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:44.452080 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:44.452097 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:44.452102 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:44.456918 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:44.457562 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:44.457578 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:44.457586 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:44.457589 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:44.460143 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:44.952171 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:44.952196 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:44.952204 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:44.952208 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:44.955658 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:44.956445 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:44.956464 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:44.956477 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:44.956484 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:44.959182 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:45.451688 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:45.451717 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:45.451733 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:45.451742 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:45.455265 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:45.455897 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:45.455917 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:45.455925 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:45.455931 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:45.458762 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:45.951811 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:45.951835 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:45.951844 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:45.951848 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:45.955024 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:45.955723 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:45.955740 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:45.955747 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:45.955750 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:45.958194 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:45.958765 1096371 pod_ready.go:102] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 12:42:46.452152 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:46.452175 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:46.452183 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:46.452188 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:46.455462 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:46.456019 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:46.456035 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:46.456045 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:46.456051 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:46.460469 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:46.951982 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:46.952007 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:46.952015 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:46.952020 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:46.955468 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:46.956197 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:46.956211 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:46.956218 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:46.956229 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:46.958688 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:47.451803 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:47.451831 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:47.451838 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:47.451843 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:47.454748 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:47.455383 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:47.455400 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:47.455407 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:47.455418 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:47.457933 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:47.951875 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:47.951900 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:47.951908 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:47.951913 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:47.955431 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:47.956378 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:47.956396 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:47.956404 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:47.956408 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:47.959048 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:47.959589 1096371 pod_ready.go:102] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 12:42:48.451935 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:48.451960 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:48.451970 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:48.451977 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:48.455145 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:48.455754 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:48.455773 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:48.455784 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:48.455789 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:48.458662 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:48.951807 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:48.951835 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:48.951843 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:48.951847 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:48.955036 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:48.955613 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:48.955628 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:48.955635 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:48.955639 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:48.957794 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:49.451700 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:49.451726 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:49.451735 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:49.451738 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:49.454867 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:49.455697 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:49.455718 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:49.455729 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:49.455738 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:49.458332 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:49.951291 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:49.951320 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:49.951329 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:49.951333 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:49.954813 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:49.955504 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:49.955519 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:49.955527 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:49.955531 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:49.958052 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:50.451165 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:50.451188 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:50.451195 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:50.451203 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:50.454033 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:50.454621 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:50.454636 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:50.454644 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:50.454648 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:50.457230 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:50.457704 1096371 pod_ready.go:102] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"False"
	I0603 12:42:50.951248 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:50.951272 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:50.951280 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:50.951283 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:50.954643 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:50.955330 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:50.955346 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:50.955353 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:50.955357 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:50.957803 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.451914 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:42:51.451937 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.451946 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.451949 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.455022 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:51.455738 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:51.455752 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.455758 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.455762 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.458085 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.458516 1096371 pod_ready.go:92] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.458535 1096371 pod_ready.go:81] duration metric: took 7.507583146s for pod "etcd-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.458550 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.458614 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492
	I0603 12:42:51.458622 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.458629 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.458634 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.460620 1096371 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 12:42:51.461255 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:51.461272 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.461281 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.461286 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.467387 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:42:51.467827 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.467847 1096371 pod_ready.go:81] duration metric: took 9.291191ms for pod "kube-apiserver-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.467855 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.467903 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m02
	I0603 12:42:51.467910 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.467917 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.467923 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.470179 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.470748 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:51.470761 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.470768 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.470772 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.472713 1096371 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 12:42:51.473306 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.473325 1096371 pod_ready.go:81] duration metric: took 5.462411ms for pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.473336 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.473388 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492
	I0603 12:42:51.473399 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.473427 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.473439 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.476070 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.477078 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:51.477094 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.477102 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.477106 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.479871 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.480529 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.480545 1096371 pod_ready.go:81] duration metric: took 7.202574ms for pod "kube-controller-manager-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.480555 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.480596 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m02
	I0603 12:42:51.480604 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.480611 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.480616 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.483040 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.512090 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:51.512112 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.512122 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.512126 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.515340 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:51.980909 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m02
	I0603 12:42:51.980932 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.980938 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.980941 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.984835 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:51.985779 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:51.985799 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:51.985807 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:51.985813 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:51.988339 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:42:51.988919 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:51.988941 1096371 pod_ready.go:81] duration metric: took 508.378458ms for pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:51.988953 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dkzgt" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:52.112349 1096371 request.go:629] Waited for 123.283767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dkzgt
	I0603 12:42:52.112416 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dkzgt
	I0603 12:42:52.112421 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.112429 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.112435 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.117165 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:52.311406 1096371 request.go:629] Waited for 193.273989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:52.311482 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:52.311488 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.311498 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.311506 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.315177 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:52.315744 1096371 pod_ready.go:92] pod "kube-proxy-dkzgt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:52.315762 1096371 pod_ready.go:81] duration metric: took 326.801779ms for pod "kube-proxy-dkzgt" in "kube-system" namespace to be "Ready" ...
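
The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's client-side rate limiter: with QPS and Burst left at zero (as in the kapi.go client config above) it defaults to roughly 5 requests/second with a burst of 10, so dense polling like this gets delayed. A small sketch of raising those limits on a rest.Config; the numbers and kubeconfig path are illustrative, not what minikube uses:

// Hedged sketch: raise client-go's client-side rate limits so bursts of
// polling are not artificially delayed. Values are illustrative.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // allow up to ~50 requests/second on average
	cfg.Burst = 100 // and short bursts of up to 100 requests
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Printf("client ready: %T\n", client)
}
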
	I0603 12:42:52.315777 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2hpg" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:52.511944 1096371 request.go:629] Waited for 196.053868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2hpg
	I0603 12:42:52.512023 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2hpg
	I0603 12:42:52.512030 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.512043 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.512056 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.515570 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:52.712057 1096371 request.go:629] Waited for 195.418875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:52.712139 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:52.712150 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.712162 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.712171 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.716555 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:42:52.717768 1096371 pod_ready.go:92] pod "kube-proxy-w2hpg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:52.717792 1096371 pod_ready.go:81] duration metric: took 402.006709ms for pod "kube-proxy-w2hpg" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:52.717802 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:52.911792 1096371 request.go:629] Waited for 193.913411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492
	I0603 12:42:52.911866 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492
	I0603 12:42:52.911872 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:52.911880 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:52.911884 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:52.915171 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:53.112324 1096371 request.go:629] Waited for 196.393723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:53.112419 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:42:53.112426 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.112437 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.112443 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.116086 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:53.116707 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:53.116727 1096371 pod_ready.go:81] duration metric: took 398.918778ms for pod "kube-scheduler-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:53.116740 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:53.311802 1096371 request.go:629] Waited for 194.958528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m02
	I0603 12:42:53.311891 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m02
	I0603 12:42:53.311902 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.311914 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.311922 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.315421 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:53.511641 1096371 request.go:629] Waited for 195.386969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:53.511734 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:42:53.511750 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.511761 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.511766 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.515114 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:53.515926 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:42:53.515948 1096371 pod_ready.go:81] duration metric: took 399.201094ms for pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:42:53.515959 1096371 pod_ready.go:38] duration metric: took 9.600737698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
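
The waits above cover every system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) on both control-plane nodes, each judged by its PodReady condition. A hedged client-go sketch of that per-pod check, with placeholder namespace, pod name, and kubeconfig path:

// Hedged sketch of the per-pod readiness check performed above: fetch the
// pod and report whether its PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(context.Background(), client, "kube-system", "etcd-ha-220492-m02")
	fmt.Println("ready:", ready, "err:", err)
}
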
	I0603 12:42:53.515975 1096371 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:42:53.516039 1096371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:42:53.538237 1096371 api_server.go:72] duration metric: took 15.940722259s to wait for apiserver process to appear ...
	I0603 12:42:53.538270 1096371 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:42:53.538305 1096371 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0603 12:42:53.546310 1096371 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0603 12:42:53.546385 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0603 12:42:53.546393 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.546402 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.546408 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.547445 1096371 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0603 12:42:53.547567 1096371 api_server.go:141] control plane version: v1.30.1
	I0603 12:42:53.547591 1096371 api_server.go:131] duration metric: took 9.311823ms to wait for apiserver health ...
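Here the API server is declared healthy once GET /healthz returns 200 ("ok") and /version answers. A small sketch of that probe with net/http; the certificate paths below are placeholders, not the profile's real files:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        // Placeholder paths: substitute the profile's CA and client cert/key.
        caPEM, err := os.ReadFile("/path/to/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        cert, err := tls.LoadX509KeyPair("/path/to/client.crt", "/path/to/client.key")
        if err != nil {
            panic(err)
        }
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
            },
        }
        resp, err := client.Get("https://192.168.39.6:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }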
	I0603 12:42:53.547609 1096371 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:42:53.712035 1096371 request.go:629] Waited for 164.33041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:53.712119 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:53.712130 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.712141 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.712146 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.718089 1096371 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:42:53.723286 1096371 system_pods.go:59] 17 kube-system pods found
	I0603 12:42:53.723315 1096371 system_pods.go:61] "coredns-7db6d8ff4d-d2tgp" [534e15ed-2e68-4275-8725-099d7240c25d] Running
	I0603 12:42:53.723320 1096371 system_pods.go:61] "coredns-7db6d8ff4d-q7687" [4e78d9e6-8feb-44ef-b44d-ed6039ab00ee] Running
	I0603 12:42:53.723326 1096371 system_pods.go:61] "etcd-ha-220492" [f2bcc3d2-bb06-4775-9080-72f7652a48b1] Running
	I0603 12:42:53.723330 1096371 system_pods.go:61] "etcd-ha-220492-m02" [035f5269-9ad9-4be8-a582-59bf15e1f6f1] Running
	I0603 12:42:53.723333 1096371 system_pods.go:61] "kindnet-5p8f7" [12b97c9f-e363-42c3-9ac9-d808c47de63a] Running
	I0603 12:42:53.723335 1096371 system_pods.go:61] "kindnet-hbl6v" [9f697f13-4a60-4247-bb5e-a8bcdd3336cd] Running
	I0603 12:42:53.723338 1096371 system_pods.go:61] "kube-apiserver-ha-220492" [a5d2882e-9fb6-4c38-b232-5dc8cb7a009e] Running
	I0603 12:42:53.723341 1096371 system_pods.go:61] "kube-apiserver-ha-220492-m02" [8ef1ba46-3175-4524-8979-fbb5f4d0608a] Running
	I0603 12:42:53.723344 1096371 system_pods.go:61] "kube-controller-manager-ha-220492" [38d8a477-8b59-43d0-9004-a70023c07b14] Running
	I0603 12:42:53.723347 1096371 system_pods.go:61] "kube-controller-manager-ha-220492-m02" [9cde04ca-9c61-4015-9f2f-08c9db8439cc] Running
	I0603 12:42:53.723350 1096371 system_pods.go:61] "kube-proxy-dkzgt" [e1536cb0-2da1-4d9a-a6f7-50adfb8f7c9a] Running
	I0603 12:42:53.723353 1096371 system_pods.go:61] "kube-proxy-w2hpg" [51a52e47-6a1e-4f9c-ba1b-feb3e362531a] Running
	I0603 12:42:53.723356 1096371 system_pods.go:61] "kube-scheduler-ha-220492" [40a56d71-3787-44fa-a3b7-9d2dc2bcf5ac] Running
	I0603 12:42:53.723359 1096371 system_pods.go:61] "kube-scheduler-ha-220492-m02" [6dede50f-8a71-4a7a-97fa-8cc4d2a6ef8c] Running
	I0603 12:42:53.723362 1096371 system_pods.go:61] "kube-vip-ha-220492" [577ecb1f-e5df-4494-b898-7d2d8e79151d] Running
	I0603 12:42:53.723365 1096371 system_pods.go:61] "kube-vip-ha-220492-m02" [a53477a8-aa28-443e-bf5d-1abb3b66ce57] Running
	I0603 12:42:53.723371 1096371 system_pods.go:61] "storage-provisioner" [f85b2808-26fa-4608-a208-2c11eaddc293] Running
	I0603 12:42:53.723378 1096371 system_pods.go:74] duration metric: took 175.75879ms to wait for pod list to return data ...
	I0603 12:42:53.723389 1096371 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:42:53.911860 1096371 request.go:629] Waited for 188.361515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0603 12:42:53.911932 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0603 12:42:53.911937 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:53.911944 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:53.911950 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:53.919345 1096371 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0603 12:42:53.919698 1096371 default_sa.go:45] found service account: "default"
	I0603 12:42:53.919721 1096371 default_sa.go:55] duration metric: took 196.321286ms for default service account to be created ...
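The default-service-account step above simply lists ServiceAccounts in the default namespace until one named "default" appears. A sketch of that wait, again assuming a hypothetical kubeconfig path:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
            if err == nil {
                for _, sa := range sas.Items {
                    if sa.Name == "default" {
                        fmt.Println("found service account:", sa.Name)
                        return
                    }
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for default service account")
    }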
	I0603 12:42:53.919733 1096371 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:42:54.112230 1096371 request.go:629] Waited for 192.396547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:54.112307 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:42:54.112314 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:54.112325 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:54.112333 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:54.118409 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:42:54.122780 1096371 system_pods.go:86] 17 kube-system pods found
	I0603 12:42:54.122810 1096371 system_pods.go:89] "coredns-7db6d8ff4d-d2tgp" [534e15ed-2e68-4275-8725-099d7240c25d] Running
	I0603 12:42:54.122818 1096371 system_pods.go:89] "coredns-7db6d8ff4d-q7687" [4e78d9e6-8feb-44ef-b44d-ed6039ab00ee] Running
	I0603 12:42:54.122825 1096371 system_pods.go:89] "etcd-ha-220492" [f2bcc3d2-bb06-4775-9080-72f7652a48b1] Running
	I0603 12:42:54.122831 1096371 system_pods.go:89] "etcd-ha-220492-m02" [035f5269-9ad9-4be8-a582-59bf15e1f6f1] Running
	I0603 12:42:54.122837 1096371 system_pods.go:89] "kindnet-5p8f7" [12b97c9f-e363-42c3-9ac9-d808c47de63a] Running
	I0603 12:42:54.122842 1096371 system_pods.go:89] "kindnet-hbl6v" [9f697f13-4a60-4247-bb5e-a8bcdd3336cd] Running
	I0603 12:42:54.122848 1096371 system_pods.go:89] "kube-apiserver-ha-220492" [a5d2882e-9fb6-4c38-b232-5dc8cb7a009e] Running
	I0603 12:42:54.122855 1096371 system_pods.go:89] "kube-apiserver-ha-220492-m02" [8ef1ba46-3175-4524-8979-fbb5f4d0608a] Running
	I0603 12:42:54.122865 1096371 system_pods.go:89] "kube-controller-manager-ha-220492" [38d8a477-8b59-43d0-9004-a70023c07b14] Running
	I0603 12:42:54.122874 1096371 system_pods.go:89] "kube-controller-manager-ha-220492-m02" [9cde04ca-9c61-4015-9f2f-08c9db8439cc] Running
	I0603 12:42:54.122884 1096371 system_pods.go:89] "kube-proxy-dkzgt" [e1536cb0-2da1-4d9a-a6f7-50adfb8f7c9a] Running
	I0603 12:42:54.122895 1096371 system_pods.go:89] "kube-proxy-w2hpg" [51a52e47-6a1e-4f9c-ba1b-feb3e362531a] Running
	I0603 12:42:54.122903 1096371 system_pods.go:89] "kube-scheduler-ha-220492" [40a56d71-3787-44fa-a3b7-9d2dc2bcf5ac] Running
	I0603 12:42:54.122910 1096371 system_pods.go:89] "kube-scheduler-ha-220492-m02" [6dede50f-8a71-4a7a-97fa-8cc4d2a6ef8c] Running
	I0603 12:42:54.122918 1096371 system_pods.go:89] "kube-vip-ha-220492" [577ecb1f-e5df-4494-b898-7d2d8e79151d] Running
	I0603 12:42:54.122924 1096371 system_pods.go:89] "kube-vip-ha-220492-m02" [a53477a8-aa28-443e-bf5d-1abb3b66ce57] Running
	I0603 12:42:54.122930 1096371 system_pods.go:89] "storage-provisioner" [f85b2808-26fa-4608-a208-2c11eaddc293] Running
	I0603 12:42:54.122944 1096371 system_pods.go:126] duration metric: took 203.201242ms to wait for k8s-apps to be running ...
	I0603 12:42:54.122956 1096371 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:42:54.123014 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:42:54.138088 1096371 system_svc.go:56] duration metric: took 15.123781ms WaitForService to wait for kubelet
	I0603 12:42:54.138113 1096371 kubeadm.go:576] duration metric: took 16.540607996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
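The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" on the node over SSH and treats exit status 0 as running. A rough equivalent with golang.org/x/crypto/ssh, using the machine key and guest IP seen in the log (paths are assumptions):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Assumed key location; minikube uses the per-machine id_rsa it generated.
        keyPEM, err := os.ReadFile("/path/to/machines/ha-220492-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", "192.168.39.169:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        // Run returns a non-nil error when the remote command exits non-zero.
        if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }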
	I0603 12:42:54.138133 1096371 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:42:54.311522 1096371 request.go:629] Waited for 173.274208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0603 12:42:54.311597 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0603 12:42:54.311604 1096371 round_trippers.go:469] Request Headers:
	I0603 12:42:54.311623 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:42:54.311633 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:42:54.315374 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:42:54.316091 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:42:54.316116 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:42:54.316128 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:42:54.316131 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:42:54.316135 1096371 node_conditions.go:105] duration metric: took 177.997261ms to run NodePressure ...
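The NodePressure step reads each node object and reports its ephemeral-storage and CPU capacity (17734596Ki and 2 above). A sketch that prints the same fields via client-go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
            // Pressure conditions (MemoryPressure, DiskPressure, PIDPressure) live in n.Status.Conditions.
        }
    }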
	I0603 12:42:54.316149 1096371 start.go:240] waiting for startup goroutines ...
	I0603 12:42:54.316186 1096371 start.go:254] writing updated cluster config ...
	I0603 12:42:54.318264 1096371 out.go:177] 
	I0603 12:42:54.319855 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:42:54.319961 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:42:54.321799 1096371 out.go:177] * Starting "ha-220492-m03" control-plane node in "ha-220492" cluster
	I0603 12:42:54.322998 1096371 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:42:54.323025 1096371 cache.go:56] Caching tarball of preloaded images
	I0603 12:42:54.323127 1096371 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:42:54.323138 1096371 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:42:54.323276 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:42:54.323466 1096371 start.go:360] acquireMachinesLock for ha-220492-m03: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:42:54.323549 1096371 start.go:364] duration metric: took 54.152µs to acquireMachinesLock for "ha-220492-m03"
	I0603 12:42:54.323576 1096371 start.go:93] Provisioning new machine with config: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:42:54.323692 1096371 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0603 12:42:54.325236 1096371 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 12:42:54.325321 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:42:54.325356 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:42:54.341059 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0603 12:42:54.341606 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:42:54.342210 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:42:54.342234 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:42:54.342575 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:42:54.342767 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetMachineName
	I0603 12:42:54.342959 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:42:54.343122 1096371 start.go:159] libmachine.API.Create for "ha-220492" (driver="kvm2")
	I0603 12:42:54.343148 1096371 client.go:168] LocalClient.Create starting
	I0603 12:42:54.343186 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 12:42:54.343227 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:42:54.343244 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:42:54.343316 1096371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 12:42:54.343341 1096371 main.go:141] libmachine: Decoding PEM data...
	I0603 12:42:54.343359 1096371 main.go:141] libmachine: Parsing certificate...
	I0603 12:42:54.343387 1096371 main.go:141] libmachine: Running pre-create checks...
	I0603 12:42:54.343399 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .PreCreateCheck
	I0603 12:42:54.343564 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetConfigRaw
	I0603 12:42:54.343938 1096371 main.go:141] libmachine: Creating machine...
	I0603 12:42:54.343952 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .Create
	I0603 12:42:54.344066 1096371 main.go:141] libmachine: (ha-220492-m03) Creating KVM machine...
	I0603 12:42:54.345206 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found existing default KVM network
	I0603 12:42:54.345353 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found existing private KVM network mk-ha-220492
	I0603 12:42:54.345545 1096371 main.go:141] libmachine: (ha-220492-m03) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03 ...
	I0603 12:42:54.345569 1096371 main.go:141] libmachine: (ha-220492-m03) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:42:54.345625 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:54.345525 1097168 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:42:54.345723 1096371 main.go:141] libmachine: (ha-220492-m03) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 12:42:54.620863 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:54.620701 1097168 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa...
	I0603 12:42:55.088497 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:55.088351 1097168 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/ha-220492-m03.rawdisk...
	I0603 12:42:55.088533 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Writing magic tar header
	I0603 12:42:55.088547 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Writing SSH key tar header
	I0603 12:42:55.088559 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:55.088471 1097168 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03 ...
	I0603 12:42:55.088574 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03
	I0603 12:42:55.088660 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 12:42:55.088686 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03 (perms=drwx------)
	I0603 12:42:55.088693 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:42:55.088709 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 12:42:55.088726 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 12:42:55.088739 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 12:42:55.088756 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home/jenkins
	I0603 12:42:55.088764 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Checking permissions on dir: /home
	I0603 12:42:55.088772 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Skipping /home - not owner
	I0603 12:42:55.088782 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 12:42:55.088793 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 12:42:55.088807 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 12:42:55.088819 1096371 main.go:141] libmachine: (ha-220492-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
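Before defining the VM, the driver generates a per-machine SSH key (the id_rsa created above, written with -rw------- permissions) alongside the raw disk image. A rough sketch of producing such a keypair with crypto/rsa and x/crypto/ssh; the output file names here are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Private key as PKCS#1 PEM, mode 0600, matching the -rw------- shown in the log.
        privPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        // Matching public key in authorized_keys format for the guest's docker user.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
    }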
	I0603 12:42:55.088830 1096371 main.go:141] libmachine: (ha-220492-m03) Creating domain...
	I0603 12:42:55.089747 1096371 main.go:141] libmachine: (ha-220492-m03) define libvirt domain using xml: 
	I0603 12:42:55.089775 1096371 main.go:141] libmachine: (ha-220492-m03) <domain type='kvm'>
	I0603 12:42:55.089787 1096371 main.go:141] libmachine: (ha-220492-m03)   <name>ha-220492-m03</name>
	I0603 12:42:55.089817 1096371 main.go:141] libmachine: (ha-220492-m03)   <memory unit='MiB'>2200</memory>
	I0603 12:42:55.089835 1096371 main.go:141] libmachine: (ha-220492-m03)   <vcpu>2</vcpu>
	I0603 12:42:55.089839 1096371 main.go:141] libmachine: (ha-220492-m03)   <features>
	I0603 12:42:55.089844 1096371 main.go:141] libmachine: (ha-220492-m03)     <acpi/>
	I0603 12:42:55.089849 1096371 main.go:141] libmachine: (ha-220492-m03)     <apic/>
	I0603 12:42:55.089854 1096371 main.go:141] libmachine: (ha-220492-m03)     <pae/>
	I0603 12:42:55.089857 1096371 main.go:141] libmachine: (ha-220492-m03)     
	I0603 12:42:55.089865 1096371 main.go:141] libmachine: (ha-220492-m03)   </features>
	I0603 12:42:55.089870 1096371 main.go:141] libmachine: (ha-220492-m03)   <cpu mode='host-passthrough'>
	I0603 12:42:55.089875 1096371 main.go:141] libmachine: (ha-220492-m03)   
	I0603 12:42:55.089879 1096371 main.go:141] libmachine: (ha-220492-m03)   </cpu>
	I0603 12:42:55.089890 1096371 main.go:141] libmachine: (ha-220492-m03)   <os>
	I0603 12:42:55.089900 1096371 main.go:141] libmachine: (ha-220492-m03)     <type>hvm</type>
	I0603 12:42:55.089911 1096371 main.go:141] libmachine: (ha-220492-m03)     <boot dev='cdrom'/>
	I0603 12:42:55.089944 1096371 main.go:141] libmachine: (ha-220492-m03)     <boot dev='hd'/>
	I0603 12:42:55.089966 1096371 main.go:141] libmachine: (ha-220492-m03)     <bootmenu enable='no'/>
	I0603 12:42:55.089973 1096371 main.go:141] libmachine: (ha-220492-m03)   </os>
	I0603 12:42:55.089979 1096371 main.go:141] libmachine: (ha-220492-m03)   <devices>
	I0603 12:42:55.089986 1096371 main.go:141] libmachine: (ha-220492-m03)     <disk type='file' device='cdrom'>
	I0603 12:42:55.090000 1096371 main.go:141] libmachine: (ha-220492-m03)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/boot2docker.iso'/>
	I0603 12:42:55.090032 1096371 main.go:141] libmachine: (ha-220492-m03)       <target dev='hdc' bus='scsi'/>
	I0603 12:42:55.090057 1096371 main.go:141] libmachine: (ha-220492-m03)       <readonly/>
	I0603 12:42:55.090065 1096371 main.go:141] libmachine: (ha-220492-m03)     </disk>
	I0603 12:42:55.090082 1096371 main.go:141] libmachine: (ha-220492-m03)     <disk type='file' device='disk'>
	I0603 12:42:55.090096 1096371 main.go:141] libmachine: (ha-220492-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 12:42:55.090111 1096371 main.go:141] libmachine: (ha-220492-m03)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/ha-220492-m03.rawdisk'/>
	I0603 12:42:55.090124 1096371 main.go:141] libmachine: (ha-220492-m03)       <target dev='hda' bus='virtio'/>
	I0603 12:42:55.090137 1096371 main.go:141] libmachine: (ha-220492-m03)     </disk>
	I0603 12:42:55.090177 1096371 main.go:141] libmachine: (ha-220492-m03)     <interface type='network'>
	I0603 12:42:55.090198 1096371 main.go:141] libmachine: (ha-220492-m03)       <source network='mk-ha-220492'/>
	I0603 12:42:55.090208 1096371 main.go:141] libmachine: (ha-220492-m03)       <model type='virtio'/>
	I0603 12:42:55.090215 1096371 main.go:141] libmachine: (ha-220492-m03)     </interface>
	I0603 12:42:55.090225 1096371 main.go:141] libmachine: (ha-220492-m03)     <interface type='network'>
	I0603 12:42:55.090235 1096371 main.go:141] libmachine: (ha-220492-m03)       <source network='default'/>
	I0603 12:42:55.090244 1096371 main.go:141] libmachine: (ha-220492-m03)       <model type='virtio'/>
	I0603 12:42:55.090255 1096371 main.go:141] libmachine: (ha-220492-m03)     </interface>
	I0603 12:42:55.090281 1096371 main.go:141] libmachine: (ha-220492-m03)     <serial type='pty'>
	I0603 12:42:55.090302 1096371 main.go:141] libmachine: (ha-220492-m03)       <target port='0'/>
	I0603 12:42:55.090315 1096371 main.go:141] libmachine: (ha-220492-m03)     </serial>
	I0603 12:42:55.090325 1096371 main.go:141] libmachine: (ha-220492-m03)     <console type='pty'>
	I0603 12:42:55.090342 1096371 main.go:141] libmachine: (ha-220492-m03)       <target type='serial' port='0'/>
	I0603 12:42:55.090351 1096371 main.go:141] libmachine: (ha-220492-m03)     </console>
	I0603 12:42:55.090362 1096371 main.go:141] libmachine: (ha-220492-m03)     <rng model='virtio'>
	I0603 12:42:55.090372 1096371 main.go:141] libmachine: (ha-220492-m03)       <backend model='random'>/dev/random</backend>
	I0603 12:42:55.090384 1096371 main.go:141] libmachine: (ha-220492-m03)     </rng>
	I0603 12:42:55.090398 1096371 main.go:141] libmachine: (ha-220492-m03)     
	I0603 12:42:55.090409 1096371 main.go:141] libmachine: (ha-220492-m03)     
	I0603 12:42:55.090416 1096371 main.go:141] libmachine: (ha-220492-m03)   </devices>
	I0603 12:42:55.090427 1096371 main.go:141] libmachine: (ha-220492-m03) </domain>
	I0603 12:42:55.090436 1096371 main.go:141] libmachine: (ha-220492-m03) 
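The XML above is then handed to libvirt to define and boot the domain. A sketch of that step using the libvirt Go bindings; the import path and exact calls are best-effort assumptions (virsh define followed by virsh start achieves the same from a shell):

    package main

    import (
        "os"

        libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
    )

    func main() {
        // Assumed file holding the domain XML shown in the log above.
        xml, err := os.ReadFile("ha-220492-m03.xml")
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the profile config
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            panic(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil { // boots the defined domain ("Creating domain...")
            panic(err)
        }
    }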
	I0603 12:42:55.096811 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:2f:3f:07 in network default
	I0603 12:42:55.097338 1096371 main.go:141] libmachine: (ha-220492-m03) Ensuring networks are active...
	I0603 12:42:55.097360 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:55.098017 1096371 main.go:141] libmachine: (ha-220492-m03) Ensuring network default is active
	I0603 12:42:55.098256 1096371 main.go:141] libmachine: (ha-220492-m03) Ensuring network mk-ha-220492 is active
	I0603 12:42:55.098610 1096371 main.go:141] libmachine: (ha-220492-m03) Getting domain xml...
	I0603 12:42:55.099251 1096371 main.go:141] libmachine: (ha-220492-m03) Creating domain...
	I0603 12:42:56.333622 1096371 main.go:141] libmachine: (ha-220492-m03) Waiting to get IP...
	I0603 12:42:56.334452 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:56.334805 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:56.334832 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:56.334779 1097168 retry.go:31] will retry after 270.111796ms: waiting for machine to come up
	I0603 12:42:56.607116 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:56.607501 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:56.607534 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:56.607452 1097168 retry.go:31] will retry after 259.20477ms: waiting for machine to come up
	I0603 12:42:56.867718 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:56.868143 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:56.868171 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:56.868092 1097168 retry.go:31] will retry after 415.070892ms: waiting for machine to come up
	I0603 12:42:57.284930 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:57.285525 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:57.285565 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:57.285456 1097168 retry.go:31] will retry after 400.725155ms: waiting for machine to come up
	I0603 12:42:57.687701 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:57.688129 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:57.688165 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:57.688062 1097168 retry.go:31] will retry after 678.144187ms: waiting for machine to come up
	I0603 12:42:58.367821 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:58.368220 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:58.368294 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:58.368200 1097168 retry.go:31] will retry after 931.821679ms: waiting for machine to come up
	I0603 12:42:59.301346 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:42:59.301831 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:42:59.301865 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:42:59.301781 1097168 retry.go:31] will retry after 755.612995ms: waiting for machine to come up
	I0603 12:43:00.058476 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:00.058926 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:00.058959 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:00.058869 1097168 retry.go:31] will retry after 1.26953951s: waiting for machine to come up
	I0603 12:43:01.330176 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:01.330783 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:01.330816 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:01.330729 1097168 retry.go:31] will retry after 1.366168747s: waiting for machine to come up
	I0603 12:43:02.698825 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:02.699340 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:02.699368 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:02.699306 1097168 retry.go:31] will retry after 1.428113816s: waiting for machine to come up
	I0603 12:43:04.128962 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:04.129604 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:04.129639 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:04.129545 1097168 retry.go:31] will retry after 2.201677486s: waiting for machine to come up
	I0603 12:43:06.332618 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:06.333109 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:06.333168 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:06.333082 1097168 retry.go:31] will retry after 3.368027556s: waiting for machine to come up
	I0603 12:43:09.702818 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:09.703237 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:09.703261 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:09.703190 1097168 retry.go:31] will retry after 4.345500761s: waiting for machine to come up
	I0603 12:43:14.050558 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:14.051004 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find current IP address of domain ha-220492-m03 in network mk-ha-220492
	I0603 12:43:14.051035 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | I0603 12:43:14.050932 1097168 retry.go:31] will retry after 4.935094667s: waiting for machine to come up
	I0603 12:43:18.990788 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:18.991372 1096371 main.go:141] libmachine: (ha-220492-m03) Found IP for machine: 192.168.39.169
	I0603 12:43:18.991419 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has current primary IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
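The "will retry after ..." lines above are a wait-for-IP loop whose delay grows (with jitter) after every failed lookup until the DHCP lease appears. A generic sketch of that retry pattern; the check function is a stand-in, not the driver's actual lease lookup:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls check until it succeeds or the deadline passes,
    // sleeping a little longer (plus jitter) after each failure, as in the log above.
    func retryWithBackoff(budget time.Duration, check func() (string, error)) (string, error) {
        base := 250 * time.Millisecond
        end := time.Now().Add(budget)
        for attempt := 1; time.Now().Before(end); attempt++ {
            ip, err := check()
            if err == nil {
                return ip, nil
            }
            sleep := time.Duration(attempt)*base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
        }
        return "", errors.New("timed out waiting for IP")
    }

    func main() {
        // Stand-in check: pretend the lease appears on the fifth attempt.
        calls := 0
        ip, err := retryWithBackoff(2*time.Minute, func() (string, error) {
            calls++
            if calls < 5 {
                return "", errors.New("unable to find current IP address")
            }
            return "192.168.39.169", nil
        })
        fmt.Println(ip, err)
    }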
	I0603 12:43:18.991430 1096371 main.go:141] libmachine: (ha-220492-m03) Reserving static IP address...
	I0603 12:43:18.991807 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | unable to find host DHCP lease matching {name: "ha-220492-m03", mac: "52:54:00:ae:60:87", ip: "192.168.39.169"} in network mk-ha-220492
	I0603 12:43:19.065731 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Getting to WaitForSSH function...
	I0603 12:43:19.065789 1096371 main.go:141] libmachine: (ha-220492-m03) Reserved static IP address: 192.168.39.169
	I0603 12:43:19.065805 1096371 main.go:141] libmachine: (ha-220492-m03) Waiting for SSH to be available...
	I0603 12:43:19.068473 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.069095 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.069253 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.069613 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Using SSH client type: external
	I0603 12:43:19.069665 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa (-rw-------)
	I0603 12:43:19.069734 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 12:43:19.069762 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | About to run SSH command:
	I0603 12:43:19.069778 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | exit 0
	I0603 12:43:19.201678 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | SSH cmd err, output: <nil>: 
	I0603 12:43:19.202022 1096371 main.go:141] libmachine: (ha-220492-m03) KVM machine creation complete!
	I0603 12:43:19.202377 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetConfigRaw
	I0603 12:43:19.202958 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:19.203165 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:19.203354 1096371 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 12:43:19.203373 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:43:19.204613 1096371 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 12:43:19.204626 1096371 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 12:43:19.204632 1096371 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 12:43:19.204638 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.207109 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.207510 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.207536 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.207716 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:19.207897 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.208077 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.208274 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:19.208431 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:19.208686 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:19.208699 1096371 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 12:43:19.316587 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:43:19.316614 1096371 main.go:141] libmachine: Detecting the provisioner...
	I0603 12:43:19.316623 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.319634 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.320042 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.320077 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.320227 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:19.320475 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.320672 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.320884 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:19.321063 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:19.321232 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:19.321243 1096371 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 12:43:19.434387 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 12:43:19.434463 1096371 main.go:141] libmachine: found compatible host: buildroot
	I0603 12:43:19.434472 1096371 main.go:141] libmachine: Provisioning with buildroot...
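Provisioner detection is just "cat /etc/os-release" over SSH plus parsing the KEY=value pairs; ID=buildroot above selects the buildroot provisioner. A small sketch of that parsing, fed with the output captured in the log:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns /etc/os-release style "KEY=value" lines into a map,
    // stripping surrounding quotes, e.g. PRETTY_NAME="Buildroot 2023.02.9".
    func parseOSRelease(data string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(data))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, "\"")
        }
        return out
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        info := parseOSRelease(sample)
        fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
    }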
	I0603 12:43:19.434482 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetMachineName
	I0603 12:43:19.434811 1096371 buildroot.go:166] provisioning hostname "ha-220492-m03"
	I0603 12:43:19.434843 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetMachineName
	I0603 12:43:19.435078 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.438030 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.438406 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.438438 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.438578 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:19.438798 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.439004 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.439273 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:19.439511 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:19.439748 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:19.439766 1096371 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220492-m03 && echo "ha-220492-m03" | sudo tee /etc/hostname
	I0603 12:43:19.570221 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492-m03
	
	I0603 12:43:19.570258 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.573051 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.573536 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.573572 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.573735 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:19.573941 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.574113 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:19.574228 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:19.574430 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:19.574652 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:19.574677 1096371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220492-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220492-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220492-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:43:19.695101 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 12:43:19.695145 1096371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:43:19.695176 1096371 buildroot.go:174] setting up certificates
	I0603 12:43:19.695188 1096371 provision.go:84] configureAuth start
	I0603 12:43:19.695203 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetMachineName
	I0603 12:43:19.695535 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:43:19.698321 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.698660 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.698692 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.698820 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:19.700861 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.701183 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:19.701215 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:19.701337 1096371 provision.go:143] copyHostCerts
	I0603 12:43:19.701373 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:43:19.701426 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 12:43:19.701439 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:43:19.701511 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:43:19.701586 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:43:19.701606 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 12:43:19.701613 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:43:19.701636 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:43:19.701683 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:43:19.701699 1096371 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 12:43:19.701706 1096371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:43:19.701726 1096371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:43:19.701776 1096371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.ha-220492-m03 san=[127.0.0.1 192.168.39.169 ha-220492-m03 localhost minikube]
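configureAuth issues a server certificate signed by the local CA with the SANs listed in the log (127.0.0.1, the node IP, the hostname, localhost, minikube). A compact sketch of such issuance with crypto/x509; the CA file paths are placeholders and a PKCS#1 RSA CA key is assumed:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // mustPEM reads a file and returns the DER bytes of its first PEM block of the given type.
    func mustPEM(path, wantType string) []byte {
        raw, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil || block.Type != wantType {
            panic("unexpected PEM content in " + path)
        }
        return block.Bytes
    }

    func main() {
        // Placeholder CA paths; minikube keeps ca.pem/ca-key.pem under .minikube/certs.
        caCert, err := x509.ParseCertificate(mustPEM("ca.pem", "CERTIFICATE"))
        if err != nil {
            panic(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem", "RSA PRIVATE KEY"))
        if err != nil {
            panic(err)
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-220492-m03"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s in the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-220492-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.169")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
    }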
	I0603 12:43:20.001276 1096371 provision.go:177] copyRemoteCerts
	I0603 12:43:20.001334 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:43:20.001359 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.003939 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.004239 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.004268 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.004520 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.004727 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.004875 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.005010 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:43:20.092470 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 12:43:20.092538 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 12:43:20.118037 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 12:43:20.118115 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:43:20.143685 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 12:43:20.143758 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:43:20.168381 1096371 provision.go:87] duration metric: took 473.178136ms to configureAuth
	I0603 12:43:20.168414 1096371 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:43:20.168639 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:43:20.168724 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.171425 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.171794 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.171822 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.171970 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.172177 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.172336 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.172484 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.172643 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:20.172821 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:20.172839 1096371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:43:20.453638 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:43:20.453674 1096371 main.go:141] libmachine: Checking connection to Docker...
	I0603 12:43:20.453684 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetURL
	I0603 12:43:20.455245 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | Using libvirt version 6000000
	I0603 12:43:20.457867 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.458347 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.458396 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.458483 1096371 main.go:141] libmachine: Docker is up and running!
	I0603 12:43:20.458495 1096371 main.go:141] libmachine: Reticulating splines...
	I0603 12:43:20.458502 1096371 client.go:171] duration metric: took 26.115344616s to LocalClient.Create
	I0603 12:43:20.458526 1096371 start.go:167] duration metric: took 26.11540413s to libmachine.API.Create "ha-220492"
	I0603 12:43:20.458538 1096371 start.go:293] postStartSetup for "ha-220492-m03" (driver="kvm2")
	I0603 12:43:20.458553 1096371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:43:20.458571 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.458829 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:43:20.458854 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.461283 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.461622 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.461649 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.461855 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.462088 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.462274 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.462460 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:43:20.553625 1096371 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:43:20.558471 1096371 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:43:20.558535 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:43:20.558610 1096371 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:43:20.558691 1096371 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 12:43:20.558703 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 12:43:20.558783 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:43:20.569972 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:43:20.595859 1096371 start.go:296] duration metric: took 137.299966ms for postStartSetup
	I0603 12:43:20.595921 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetConfigRaw
	I0603 12:43:20.596583 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:43:20.599761 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.600203 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.600223 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.600606 1096371 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:43:20.600826 1096371 start.go:128] duration metric: took 26.277116804s to createHost
	I0603 12:43:20.600858 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.603058 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.603486 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.603509 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.603699 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.603957 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.604121 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.604249 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.604410 1096371 main.go:141] libmachine: Using SSH client type: native
	I0603 12:43:20.604590 1096371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0603 12:43:20.604600 1096371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:43:20.720338 1096371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717418600.696165887
	
	I0603 12:43:20.720374 1096371 fix.go:216] guest clock: 1717418600.696165887
	I0603 12:43:20.720386 1096371 fix.go:229] Guest: 2024-06-03 12:43:20.696165887 +0000 UTC Remote: 2024-06-03 12:43:20.600841955 +0000 UTC m=+155.483411943 (delta=95.323932ms)
	I0603 12:43:20.720414 1096371 fix.go:200] guest clock delta is within tolerance: 95.323932ms
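
The fix.go lines above compute the clock skew between the new guest and the host (guest minus remote, here roughly 95ms) and accept it because it falls inside the allowed tolerance, so the guest clock is left alone. A minimal Go sketch of that comparison follows; the 2-second tolerance and the way the two timestamps are obtained are assumptions for illustration, not minikube's actual fix.go logic.

// clockdelta_sketch.go - minimal sketch (not minikube's implementation) of the
// guest-clock tolerance check logged above. The 2s tolerance and the way the
// guest timestamp is obtained are assumptions for illustration.
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the difference between the guest clock and
// the host clock is small enough to skip resetting the guest time.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(95 * time.Millisecond) // e.g. the ~95ms delta seen in the log
	if delta, ok := withinTolerance(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
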
	I0603 12:43:20.720422 1096371 start.go:83] releasing machines lock for "ha-220492-m03", held for 26.396858432s
	I0603 12:43:20.720449 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.720784 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:43:20.723898 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.724327 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.724358 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.726351 1096371 out.go:177] * Found network options:
	I0603 12:43:20.728299 1096371 out.go:177]   - NO_PROXY=192.168.39.6,192.168.39.106
	W0603 12:43:20.729832 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 12:43:20.729861 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
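
The proxy.go warnings above appear to report that a node IP is not covered by an existing NO_PROXY entry ("Error ip not in block"), which is benign here. A self-contained sketch of that kind of membership test, treating NO_PROXY as a comma-separated list of IPs and CIDR blocks, is shown below; the parsing rules are a simplified assumption, not minikube's proxy package.

// noproxy_sketch.go - illustrative check of whether a node IP is already covered
// by a NO_PROXY entry (exact IP or CIDR block). Simplified assumption, not
// minikube's code.
package main

import (
	"fmt"
	"net"
	"strings"
)

// coveredByNoProxy returns true if ip matches any entry in the comma-separated
// NO_PROXY value, either as an exact IP or as a member of a CIDR block.
func coveredByNoProxy(noProxy, ip string) bool {
	target := net.ParseIP(ip)
	if target == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if _, block, err := net.ParseCIDR(entry); err == nil {
			if block.Contains(target) {
				return true
			}
			continue
		}
		if other := net.ParseIP(entry); other != nil && other.Equal(target) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(coveredByNoProxy("192.168.39.6,192.168.39.106", "192.168.39.169")) // false
}
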
	I0603 12:43:20.729881 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.730652 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.730895 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:43:20.731055 1096371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:43:20.731108 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	W0603 12:43:20.731132 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	W0603 12:43:20.731162 1096371 proxy.go:119] fail to check proxy env: Error ip not in block
	I0603 12:43:20.731283 1096371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:43:20.731300 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:43:20.734387 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.734430 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.734770 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.734801 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.734827 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:20.734841 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:20.734881 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.735097 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.735110 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:43:20.735275 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:43:20.735344 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.735427 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:43:20.735499 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:43:20.735570 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:43:20.981022 1096371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:43:20.988392 1096371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:43:20.988507 1096371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:43:21.006315 1096371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 12:43:21.006443 1096371 start.go:494] detecting cgroup driver to use...
	I0603 12:43:21.006547 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:43:21.025644 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:43:21.040335 1096371 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:43:21.040399 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:43:21.056817 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:43:21.072459 1096371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:43:21.207222 1096371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:43:21.352527 1096371 docker.go:233] disabling docker service ...
	I0603 12:43:21.352611 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:43:21.369215 1096371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:43:21.383793 1096371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:43:21.530137 1096371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:43:21.643860 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:43:21.658602 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:43:21.678687 1096371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:43:21.678761 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.689934 1096371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:43:21.690016 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.701661 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.714292 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.726116 1096371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:43:21.738093 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.750313 1096371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:43:21.770297 1096371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
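
The run of sed commands above pins the pause image, forces cgroup_manager to "cgroupfs", resets conmon_cgroup, and seeds default_sysctls in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf. The same kind of single-key rewrite can be expressed in Go with a multiline regexp; the sketch below covers only the cgroup_manager line and uses a local file path, so treat it as an illustration rather than minikube's implementation.

// crioconf_sketch.go - sketch of the sed-style rewrites above, done in Go: force a
// single key in the CRI-O drop-in to a known value. Only the cgroup_manager line
// is shown; the file path and in-place write are illustrative.
package main

import (
	"os"
	"regexp"
)

var cgroupManagerLine = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

// setCgroupManager rewrites whatever cgroup_manager line exists to the given value.
func setCgroupManager(path, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	updated := cgroupManagerLine.ReplaceAll(data, []byte(`cgroup_manager = "`+value+`"`))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	if err := setCgroupManager("02-crio.conf", "cgroupfs"); err != nil {
		panic(err)
	}
}
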
	I0603 12:43:21.783297 1096371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:43:21.794238 1096371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 12:43:21.794304 1096371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 12:43:21.807985 1096371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:43:21.818315 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:43:21.964194 1096371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:43:22.115343 1096371 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:43:22.115462 1096371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:43:22.120272 1096371 start.go:562] Will wait 60s for crictl version
	I0603 12:43:22.120327 1096371 ssh_runner.go:195] Run: which crictl
	I0603 12:43:22.124229 1096371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:43:22.172026 1096371 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:43:22.172099 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:43:22.202369 1096371 ssh_runner.go:195] Run: crio --version
	I0603 12:43:22.233707 1096371 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:43:22.235401 1096371 out.go:177]   - env NO_PROXY=192.168.39.6
	I0603 12:43:22.236873 1096371 out.go:177]   - env NO_PROXY=192.168.39.6,192.168.39.106
	I0603 12:43:22.238349 1096371 main.go:141] libmachine: (ha-220492-m03) Calling .GetIP
	I0603 12:43:22.241427 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:22.241843 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:43:22.241868 1096371 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:43:22.242108 1096371 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:43:22.246783 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 12:43:22.259935 1096371 mustload.go:65] Loading cluster: ha-220492
	I0603 12:43:22.260185 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:43:22.260462 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:43:22.260520 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:43:22.277384 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0603 12:43:22.277925 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:43:22.278444 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:43:22.278469 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:43:22.278957 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:43:22.279163 1096371 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:43:22.280930 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:43:22.281238 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:43:22.281286 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:43:22.296092 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0603 12:43:22.296571 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:43:22.297036 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:43:22.297054 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:43:22.297385 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:43:22.297640 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:43:22.297834 1096371 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492 for IP: 192.168.39.169
	I0603 12:43:22.297849 1096371 certs.go:194] generating shared ca certs ...
	I0603 12:43:22.297870 1096371 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:43:22.298030 1096371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:43:22.298082 1096371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:43:22.298097 1096371 certs.go:256] generating profile certs ...
	I0603 12:43:22.298197 1096371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key
	I0603 12:43:22.298231 1096371 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.4fd59e07
	I0603 12:43:22.298272 1096371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.4fd59e07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.106 192.168.39.169 192.168.39.254]
	I0603 12:43:22.384345 1096371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.4fd59e07 ...
	I0603 12:43:22.384396 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.4fd59e07: {Name:mk9434cb6dd09b3cdb5570cdf26f69733c2691cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:43:22.384595 1096371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.4fd59e07 ...
	I0603 12:43:22.384608 1096371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.4fd59e07: {Name:mk5ce8cb87692994d1dd4d129a27c585f4731b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:43:22.384681 1096371 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.4fd59e07 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt
	I0603 12:43:22.384833 1096371 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.4fd59e07 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key
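
The certs.go/crypto.go lines above mint apiserver.crt for this profile with a purely IP-based SAN list (the cluster service IP, localhost, the node IPs and the HA VIP 192.168.39.254) and then move the pair into place. The sketch below shows the corresponding crypto/x509 calls; it creates a throwaway CA in-process and elides error handling, whereas minikube signs with the existing minikubeCA key, so this is only an illustration of how IP SANs end up in the certificate.

// certsan_sketch.go - minimal sketch of issuing an API-server certificate whose
// SANs are IP addresses, as in the apiserver.crt generation logged above.
// Error handling is elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube would reuse the existing minikubeCA key instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate carrying the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.6"), net.ParseIP("192.168.39.106"),
			net.ParseIP("192.168.39.169"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
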
	I0603 12:43:22.384963 1096371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key
	I0603 12:43:22.384980 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 12:43:22.384993 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 12:43:22.385007 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 12:43:22.385020 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 12:43:22.385033 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 12:43:22.385045 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 12:43:22.385057 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 12:43:22.385072 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 12:43:22.385118 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 12:43:22.385150 1096371 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 12:43:22.385166 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:43:22.385189 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:43:22.385211 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:43:22.385232 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 12:43:22.385272 1096371 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:43:22.385299 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 12:43:22.385316 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 12:43:22.385328 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:43:22.385362 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:43:22.388803 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:43:22.389462 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:43:22.389491 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:43:22.389727 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:43:22.389957 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:43:22.390116 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:43:22.390327 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:43:22.465777 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0603 12:43:22.471023 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0603 12:43:22.485916 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0603 12:43:22.491124 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0603 12:43:22.504898 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0603 12:43:22.509773 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0603 12:43:22.523642 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0603 12:43:22.527983 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0603 12:43:22.539008 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0603 12:43:22.543891 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0603 12:43:22.557986 1096371 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0603 12:43:22.565354 1096371 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0603 12:43:22.577915 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:43:22.604742 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:43:22.629391 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:43:22.659078 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:43:22.686576 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0603 12:43:22.712999 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 12:43:22.738945 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:43:22.765149 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:43:22.793642 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 12:43:22.819440 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 12:43:22.845526 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:43:22.869544 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0603 12:43:22.886657 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0603 12:43:22.904446 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0603 12:43:22.921153 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0603 12:43:22.938320 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0603 12:43:22.957084 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0603 12:43:22.974794 1096371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
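
Each "scp ... -->" line above pushes a local certificate, key, or kubeconfig onto the new machine over the SSH session established earlier. The sketch below reproduces the shape of one such transfer using golang.org/x/crypto/ssh together with github.com/pkg/sftp; minikube actually goes through its own ssh_runner (with sudo for privileged paths) rather than SFTP, and the key path, address, user, and destination here are examples only.

// sshcopy_sketch.go - illustrative file push to the guest over SSH, standing in
// for the ssh_runner "scp ... -->" steps above. Permissions/sudo handling is
// glossed over; paths and addresses are examples.
package main

import (
	"io"
	"log"
	"os"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa") // hypothetical machine key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	conn, err := ssh.Dial("tcp", "192.168.39.169:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client, err := sftp.NewClient(conn)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	src, err := os.Open("ca.crt") // local asset to push
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()
	dst, err := client.Create("/tmp/ca.crt") // example destination
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()
	if _, err := io.Copy(dst, src); err != nil {
		log.Fatal(err)
	}
}
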
	I0603 12:43:22.992634 1096371 ssh_runner.go:195] Run: openssl version
	I0603 12:43:22.999057 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:43:23.011182 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:43:23.016117 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:43:23.016177 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:43:23.022552 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 12:43:23.034290 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 12:43:23.046115 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 12:43:23.050785 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 12:43:23.050845 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 12:43:23.056647 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 12:43:23.069008 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 12:43:23.080741 1096371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 12:43:23.085671 1096371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 12:43:23.085736 1096371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 12:43:23.092133 1096371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:43:23.105182 1096371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:43:23.109925 1096371 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 12:43:23.109994 1096371 kubeadm.go:928] updating node {m03 192.168.39.169 8443 v1.30.1 crio true true} ...
	I0603 12:43:23.110105 1096371 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220492-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 12:43:23.110148 1096371 kube-vip.go:115] generating kube-vip config ...
	I0603 12:43:23.110204 1096371 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 12:43:23.131514 1096371 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 12:43:23.131635 1096371 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 12:43:23.131705 1096371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:43:23.142473 1096371 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0603 12:43:23.142542 1096371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0603 12:43:23.153137 1096371 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0603 12:43:23.153138 1096371 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0603 12:43:23.153143 1096371 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0603 12:43:23.153183 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 12:43:23.153166 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 12:43:23.153215 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:43:23.153253 1096371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0603 12:43:23.153285 1096371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0603 12:43:23.172473 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0603 12:43:23.172525 1096371 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 12:43:23.172581 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0603 12:43:23.172604 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0603 12:43:23.172621 1096371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0603 12:43:23.172526 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0603 12:43:23.185256 1096371 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0603 12:43:23.185296 1096371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
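
Because /var/lib/minikube/binaries/v1.30.1 is empty on the new node, the binaries are pushed from the local cache, and the binary.go lines note that fresh downloads would come from dl.k8s.io with a checksum=file:...sha256 query so the artifact can be verified against the published digest. A standalone sketch of that verify-after-download step for kubectl follows; minikube delegates this to its download machinery, so the manual SHA-256 comparison below is illustrative only (the output path is an example).

// checksum_sketch.go - standalone sketch of "download and verify against the
// published .sha256" implied by the dl.k8s.io URLs above. Illustrative only.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory, failing on any non-200 status.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0]
	digest := sha256.Sum256(bin)
	got := hex.EncodeToString(digest[:])
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified:", got)
}
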
	I0603 12:43:24.134865 1096371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0603 12:43:24.145082 1096371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0603 12:43:24.162562 1096371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:43:24.179437 1096371 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 12:43:24.196174 1096371 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 12:43:24.200209 1096371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
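
The /bin/bash one-liner above makes the hosts entry idempotent: it filters out any existing line ending in control-plane.minikube.internal, appends the VIP mapping, and copies the result back over /etc/hosts. A small Go version of the same rewrite is sketched below; it operates on a scratch file instead of /etc/hosts, and the exact whitespace handling is an assumption.

// hostsentry_sketch.go - sketch of the idempotent hosts-file rewrite performed by
// the bash one-liner above: drop any line that already maps the name, then append
// a fresh "IP<TAB>name" entry. It edits a scratch file, not /etc/hosts.
package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites the file at path so that exactly one line maps name to ip.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	if trimmed := strings.TrimRight(string(data), "\n"); trimmed != "" {
		for _, line := range strings.Split(trimmed, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale mapping for this name, drop it
			}
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts.scratch", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
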
	I0603 12:43:24.212797 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:43:24.344234 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:43:24.362738 1096371 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:43:24.363407 1096371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:43:24.363478 1096371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:43:24.382750 1096371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I0603 12:43:24.383293 1096371 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:43:24.383844 1096371 main.go:141] libmachine: Using API Version  1
	I0603 12:43:24.383869 1096371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:43:24.384258 1096371 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:43:24.384452 1096371 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:43:24.384614 1096371 start.go:316] joinCluster: &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:43:24.384749 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0603 12:43:24.384776 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:43:24.388207 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:43:24.388797 1096371 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:43:24.388830 1096371 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:43:24.389000 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:43:24.389214 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:43:24.389426 1096371 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:43:24.389609 1096371 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:43:24.551779 1096371 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:43:24.551825 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vhxak4.tfp86wxpifu70ily --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220492-m03 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443"
	I0603 12:43:48.466021 1096371 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vhxak4.tfp86wxpifu70ily --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-220492-m03 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443": (23.914162266s)
	I0603 12:43:48.466076 1096371 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0603 12:43:49.074811 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-220492-m03 minikube.k8s.io/updated_at=2024_06_03T12_43_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=ha-220492 minikube.k8s.io/primary=false
	I0603 12:43:49.193601 1096371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-220492-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0603 12:43:49.309011 1096371 start.go:318] duration metric: took 24.924388938s to joinCluster
	I0603 12:43:49.309110 1096371 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 12:43:49.310641 1096371 out.go:177] * Verifying Kubernetes components...
	I0603 12:43:49.309516 1096371 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:43:49.311715 1096371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:43:49.581989 1096371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:43:49.608808 1096371 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:43:49.609148 1096371 kapi.go:59] client config for ha-220492: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.crt", KeyFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key", CAFile:"/home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0603 12:43:49.609240 1096371 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.6:8443
	I0603 12:43:49.609573 1096371 node_ready.go:35] waiting up to 6m0s for node "ha-220492-m03" to be "Ready" ...
	I0603 12:43:49.609704 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:49.609719 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:49.609729 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:49.609739 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:49.613506 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:50.110524 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:50.110557 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:50.110565 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:50.110574 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:50.113651 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:50.610441 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:50.610464 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:50.610472 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:50.610476 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:50.613944 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:51.109896 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:51.109918 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:51.109927 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:51.109930 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:51.115692 1096371 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:43:51.610724 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:51.610745 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:51.610753 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:51.610757 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:51.614405 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:51.614931 1096371 node_ready.go:53] node "ha-220492-m03" has status "Ready":"False"
	I0603 12:43:52.110574 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:52.110598 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:52.110607 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:52.110610 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:52.115335 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:52.610167 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:52.610191 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:52.610199 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:52.610203 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:52.614880 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:53.110742 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:53.110771 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:53.110783 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:53.110791 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:53.114681 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:53.610742 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:53.610774 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:53.610785 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:53.610789 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:53.615916 1096371 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0603 12:43:53.616844 1096371 node_ready.go:53] node "ha-220492-m03" has status "Ready":"False"
	I0603 12:43:54.110157 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:54.110205 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:54.110214 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:54.110218 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:54.114472 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:54.610217 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:54.610240 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:54.610250 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:54.610254 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:54.613866 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.110220 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:55.110245 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.110253 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.110257 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.113487 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.610455 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:55.610479 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.610489 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.610496 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.614341 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.614994 1096371 node_ready.go:49] node "ha-220492-m03" has status "Ready":"True"
	I0603 12:43:55.615014 1096371 node_ready.go:38] duration metric: took 6.005414937s for node "ha-220492-m03" to be "Ready" ...
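The entries above show the node_ready loop: minikube re-fetches /api/v1/nodes/ha-220492-m03 roughly every 500ms until the node's Ready condition flips to "True". The following is a minimal client-go sketch of that check, an illustrative reconstruction rather than minikube's own code; the helper name pollNodeReady and the kubeconfig path are assumptions.

// Hypothetical sketch of the readiness poll seen above: fetch the Node object
// on an interval and return once its Ready condition reports True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func pollNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil // node reports Ready:"True"
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence in the log
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := pollNodeReady(ctx, cs, "ha-220492-m03"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}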
	I0603 12:43:55.615023 1096371 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:43:55.615083 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:43:55.615092 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.615099 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.615102 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.621643 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:43:55.629636 1096371 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.629772 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-d2tgp
	I0603 12:43:55.629785 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.629793 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.629797 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.632667 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.633375 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:55.633393 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.633420 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.633426 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.636047 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.636441 1096371 pod_ready.go:92] pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:55.636458 1096371 pod_ready.go:81] duration metric: took 6.798231ms for pod "coredns-7db6d8ff4d-d2tgp" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.636465 1096371 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.636515 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-q7687
	I0603 12:43:55.636523 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.636530 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.636537 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.638896 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.639602 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:55.639617 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.639623 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.639627 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.643342 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.643839 1096371 pod_ready.go:92] pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:55.643860 1096371 pod_ready.go:81] duration metric: took 7.385134ms for pod "coredns-7db6d8ff4d-q7687" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.643871 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.643931 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492
	I0603 12:43:55.643947 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.643957 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.643966 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.646755 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.647316 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:55.647331 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.647338 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.647343 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.650630 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:55.651339 1096371 pod_ready.go:92] pod "etcd-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:55.651355 1096371 pod_ready.go:81] duration metric: took 7.477443ms for pod "etcd-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.651363 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.651405 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m02
	I0603 12:43:55.651413 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.651419 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.651424 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.653855 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.654466 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:43:55.654488 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.654495 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.654499 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.656908 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:55.657483 1096371 pod_ready.go:92] pod "etcd-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:55.657500 1096371 pod_ready.go:81] duration metric: took 6.129437ms for pod "etcd-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.657508 1096371 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:55.810790 1096371 request.go:629] Waited for 153.183486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:55.810879 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:55.810887 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:55.810898 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:55.810909 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:55.814643 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:56.010887 1096371 request.go:629] Waited for 195.364967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.010959 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.010966 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.010974 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.010978 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.016371 1096371 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
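The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's local rate limiter, not by the API server: with the client's default limits (QPS 5, burst 10 unless overridden), the back-to-back pod and node GETs in this loop queue locally, and client-go logs any noticeable wait before the request is sent. Below is a hedged sketch of where those knobs live; the QPS/Burst values are illustrative, not the ones minikube uses.

// Hedged sketch: raising QPS/Burst on rest.Config removes most of these
// local throttling waits for a tight poll loop like the one above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; a readiness loop issuing several GETs per
	// second exceeds that quickly, so requests wait in a local queue.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d pods without local throttling delays\n", len(pods.Items))
}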
	I0603 12:43:56.211372 1096371 request.go:629] Waited for 53.221404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:56.211443 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:56.211449 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.211456 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.211461 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.215030 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:56.411325 1096371 request.go:629] Waited for 195.396738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.411387 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.411392 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.411400 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.411404 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.414791 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:56.658682 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:56.658707 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.658714 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.658720 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.662087 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:56.811307 1096371 request.go:629] Waited for 148.327175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.811410 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:56.811419 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:56.811429 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:56.811441 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:56.815038 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:57.158412 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:57.158436 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.158445 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.158449 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.161908 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:57.210946 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:57.210969 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.210978 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.210982 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.214225 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:57.658114 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-220492-m03
	I0603 12:43:57.658149 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.658162 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.658168 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.661706 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:57.662561 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:57.662584 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.662593 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.662599 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.665464 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:43:57.666181 1096371 pod_ready.go:92] pod "etcd-ha-220492-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:57.666203 1096371 pod_ready.go:81] duration metric: took 2.008687571s for pod "etcd-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
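Each per-pod wait above (coredns, etcd, and the kube-* control-plane pods that follow) repeats one pattern: GET the pod, look for a Ready condition with status "True", GET its node as a cross-check, and retry on a short interval. A hedged client-go sketch of the pod-side check follows; the helper name isPodReady and the kubeconfig path are illustrative assumptions.

// Hedged sketch of the per-pod readiness check repeated above: fetch the pod
// from kube-system and report whether its Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "etcd-ha-220492-m03")
	if err != nil {
		panic(err)
	}
	fmt.Println("etcd-ha-220492-m03 Ready:", ready)
}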
	I0603 12:43:57.666226 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:57.810534 1096371 request.go:629] Waited for 144.228037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492
	I0603 12:43:57.810625 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492
	I0603 12:43:57.810633 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:57.810641 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:57.810646 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:57.815060 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:58.011169 1096371 request.go:629] Waited for 195.365307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:58.011257 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:43:58.011263 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.011271 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.011279 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.015015 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:58.015841 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:58.015863 1096371 pod_ready.go:81] duration metric: took 349.622915ms for pod "kube-apiserver-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:58.015872 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:58.210933 1096371 request.go:629] Waited for 194.958077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m02
	I0603 12:43:58.211005 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m02
	I0603 12:43:58.211010 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.211018 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.211026 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.215371 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:43:58.411477 1096371 request.go:629] Waited for 194.387478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:43:58.411558 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:43:58.411566 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.411577 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.411597 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.426670 1096371 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0603 12:43:58.427342 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:43:58.427369 1096371 pod_ready.go:81] duration metric: took 411.489767ms for pod "kube-apiserver-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:58.427396 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:43:58.611227 1096371 request.go:629] Waited for 183.715657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:58.611320 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:58.611336 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.611347 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.611354 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.614851 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:58.811360 1096371 request.go:629] Waited for 195.351281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:58.811467 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:58.811473 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:58.811481 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:58.811486 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:58.815323 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.010969 1096371 request.go:629] Waited for 83.254261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:59.011051 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:59.011064 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.011079 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.011090 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.014874 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.210930 1096371 request.go:629] Waited for 195.37591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:59.211008 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:59.211015 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.211027 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.211038 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.214615 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.428256 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:59.428284 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.428293 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.428297 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.432197 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.611345 1096371 request.go:629] Waited for 178.353076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:59.611453 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:43:59.611460 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.611467 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.611472 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.614758 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:43:59.928634 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:43:59.928659 1096371 round_trippers.go:469] Request Headers:
	I0603 12:43:59.928668 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:43:59.928674 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:43:59.932105 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.011090 1096371 request.go:629] Waited for 78.235001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:00.011155 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:00.011160 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.011168 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.011173 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.014697 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.428581 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492-m03
	I0603 12:44:00.428606 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.428613 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.428616 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.432148 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.433088 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:00.433107 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.433118 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.433127 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.436412 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.437015 1096371 pod_ready.go:92] pod "kube-apiserver-ha-220492-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:00.437042 1096371 pod_ready.go:81] duration metric: took 2.009636337s for pod "kube-apiserver-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:00.437055 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:00.611461 1096371 request.go:629] Waited for 174.328001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492
	I0603 12:44:00.611561 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492
	I0603 12:44:00.611581 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.611593 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.611603 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.615409 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.811511 1096371 request.go:629] Waited for 195.407138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:00.811579 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:00.811584 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:00.811592 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:00.811596 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:00.815065 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:00.815768 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:00.815787 1096371 pod_ready.go:81] duration metric: took 378.723871ms for pod "kube-controller-manager-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:00.815797 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:01.010875 1096371 request.go:629] Waited for 194.987952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m02
	I0603 12:44:01.010941 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m02
	I0603 12:44:01.010946 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.010953 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.010957 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.014830 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:01.210945 1096371 request.go:629] Waited for 195.366991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:01.211031 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:01.211038 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.211051 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.211062 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.214923 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:01.215608 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:01.215632 1096371 pod_ready.go:81] duration metric: took 399.828657ms for pod "kube-controller-manager-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:01.215644 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:01.410600 1096371 request.go:629] Waited for 194.859894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:01.410673 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:01.410678 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.410686 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.410690 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.414051 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:01.611128 1096371 request.go:629] Waited for 196.374452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:01.611213 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:01.611223 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.611234 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.611244 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.614275 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:44:01.811094 1096371 request.go:629] Waited for 95.2615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:01.811183 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:01.811190 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:01.811199 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:01.811205 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:01.815552 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:02.011530 1096371 request.go:629] Waited for 195.337494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.011613 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.011621 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.011632 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.011638 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.015514 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:02.216018 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:02.216043 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.216052 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.216058 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.219384 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:02.410775 1096371 request.go:629] Waited for 190.706668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.410846 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.410856 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.410865 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.410870 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.414994 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:02.716753 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:02.716786 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.716797 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.716805 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.720353 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:02.811450 1096371 request.go:629] Waited for 90.271231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.811537 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:02.811543 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:02.811552 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:02.811559 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:02.814813 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:03.216672 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:03.216698 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:03.216706 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:03.216710 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:03.220641 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:03.221740 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:03.221764 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:03.221775 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:03.221782 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:03.225530 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:03.226056 1096371 pod_ready.go:102] pod "kube-controller-manager-ha-220492-m03" in "kube-system" namespace has status "Ready":"False"
	I0603 12:44:03.716608 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:03.716642 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:03.716654 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:03.716660 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:03.720707 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:03.721672 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:03.721691 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:03.721698 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:03.721703 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:03.724982 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:04.215874 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:04.215903 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.215915 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.215922 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.219253 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:04.219961 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:04.219978 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.219986 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.219992 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.222791 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:44:04.716689 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-220492-m03
	I0603 12:44:04.716713 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.716721 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.716725 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.720184 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:04.721021 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:04.721038 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.721046 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.721050 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.724144 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:04.724770 1096371 pod_ready.go:92] pod "kube-controller-manager-ha-220492-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:04.724796 1096371 pod_ready.go:81] duration metric: took 3.509143073s for pod "kube-controller-manager-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:04.724810 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dkzgt" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:04.724903 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dkzgt
	I0603 12:44:04.724915 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.724926 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.724935 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.727679 1096371 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0603 12:44:04.810611 1096371 request.go:629] Waited for 82.1903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:04.810730 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:04.810749 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:04.810757 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:04.810762 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:04.815132 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:04.815977 1096371 pod_ready.go:92] pod "kube-proxy-dkzgt" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:04.815999 1096371 pod_ready.go:81] duration metric: took 91.179243ms for pod "kube-proxy-dkzgt" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:04.816008 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5l8r" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.011523 1096371 request.go:629] Waited for 195.432467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5l8r
	I0603 12:44:05.011607 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5l8r
	I0603 12:44:05.011614 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.011632 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.011647 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.014986 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:05.210787 1096371 request.go:629] Waited for 194.851603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:05.210859 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:05.210864 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.210873 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.210878 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.214896 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:05.215591 1096371 pod_ready.go:92] pod "kube-proxy-m5l8r" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:05.215617 1096371 pod_ready.go:81] duration metric: took 399.601076ms for pod "kube-proxy-m5l8r" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.215632 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2hpg" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.410992 1096371 request.go:629] Waited for 195.257151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2hpg
	I0603 12:44:05.411099 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2hpg
	I0603 12:44:05.411113 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.411124 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.411140 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.415054 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:05.610577 1096371 request.go:629] Waited for 194.232033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:05.610664 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:05.610669 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.610676 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.610680 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.614567 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:05.615166 1096371 pod_ready.go:92] pod "kube-proxy-w2hpg" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:05.615192 1096371 pod_ready.go:81] duration metric: took 399.552426ms for pod "kube-proxy-w2hpg" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.615201 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:05.811327 1096371 request.go:629] Waited for 196.001211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492
	I0603 12:44:05.811428 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492
	I0603 12:44:05.811437 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:05.811447 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:05.811454 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:05.817845 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:44:06.010933 1096371 request.go:629] Waited for 191.357857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:06.010999 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492
	I0603 12:44:06.011004 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.011018 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.011022 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.014838 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:06.015579 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:06.015598 1096371 pod_ready.go:81] duration metric: took 400.390489ms for pod "kube-scheduler-ha-220492" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.015609 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.210751 1096371 request.go:629] Waited for 195.041049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m02
	I0603 12:44:06.210819 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m02
	I0603 12:44:06.210824 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.210832 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.210836 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.215303 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:06.411210 1096371 request.go:629] Waited for 195.279246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:06.411325 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m02
	I0603 12:44:06.411337 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.411348 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.411361 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.414932 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:06.415448 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:06.415467 1096371 pod_ready.go:81] duration metric: took 399.852489ms for pod "kube-scheduler-ha-220492-m02" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.415477 1096371 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.610489 1096371 request.go:629] Waited for 194.919273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m03
	I0603 12:44:06.610608 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-220492-m03
	I0603 12:44:06.610612 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.610620 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.610625 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.614770 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:06.810914 1096371 request.go:629] Waited for 195.37202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:06.810997 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-220492-m03
	I0603 12:44:06.811002 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.811010 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.811015 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.815258 1096371 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0603 12:44:06.816177 1096371 pod_ready.go:92] pod "kube-scheduler-ha-220492-m03" in "kube-system" namespace has status "Ready":"True"
	I0603 12:44:06.816196 1096371 pod_ready.go:81] duration metric: took 400.712759ms for pod "kube-scheduler-ha-220492-m03" in "kube-system" namespace to be "Ready" ...
	I0603 12:44:06.816216 1096371 pod_ready.go:38] duration metric: took 11.201183722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 12:44:06.816239 1096371 api_server.go:52] waiting for apiserver process to appear ...
	I0603 12:44:06.816303 1096371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:44:06.833784 1096371 api_server.go:72] duration metric: took 17.524633386s to wait for apiserver process to appear ...
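Before probing the HTTP endpoints, the test confirms a kube-apiserver process exists by running pgrep through minikube's SSH runner (the Run: line above). The sketch below shows the same check executed directly on the node with os/exec; running it locally instead of over SSH is an assumption made for illustration.

// Hedged sketch of the process check: pgrep exits 0 and prints a PID when a
// kube-apiserver process matching the pattern is running.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}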
	I0603 12:44:06.833813 1096371 api_server.go:88] waiting for apiserver healthz status ...
	I0603 12:44:06.833848 1096371 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0603 12:44:06.838436 1096371 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0603 12:44:06.838515 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0603 12:44:06.838524 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:06.838531 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:06.838535 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:06.839487 1096371 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0603 12:44:06.839667 1096371 api_server.go:141] control plane version: v1.30.1
	I0603 12:44:06.839685 1096371 api_server.go:131] duration metric: took 5.86597ms to wait for apiserver health ...
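The healthz and version probes above are plain GETs against the API server. A hedged sketch using the clientset's REST and discovery clients, which reuse the same TLS credentials: the /healthz body should be the literal "ok", and ServerVersion() returns the value behind the "control plane version: v1.30.1" line.

// Hedged sketch of the two checks above: raw GET /healthz, then the
// discovery client's ServerVersion().
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body) // "ok" when the apiserver is healthy

	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.30.1 in this run
}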
	I0603 12:44:06.839693 1096371 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 12:44:07.011386 1096371 request.go:629] Waited for 171.61689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:44:07.011494 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:44:07.011505 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:07.011518 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:07.011530 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:07.019917 1096371 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0603 12:44:07.026879 1096371 system_pods.go:59] 24 kube-system pods found
	I0603 12:44:07.026905 1096371 system_pods.go:61] "coredns-7db6d8ff4d-d2tgp" [534e15ed-2e68-4275-8725-099d7240c25d] Running
	I0603 12:44:07.026910 1096371 system_pods.go:61] "coredns-7db6d8ff4d-q7687" [4e78d9e6-8feb-44ef-b44d-ed6039ab00ee] Running
	I0603 12:44:07.026913 1096371 system_pods.go:61] "etcd-ha-220492" [f2bcc3d2-bb06-4775-9080-72f7652a48b1] Running
	I0603 12:44:07.026916 1096371 system_pods.go:61] "etcd-ha-220492-m02" [035f5269-9ad9-4be8-a582-59bf15e1f6f1] Running
	I0603 12:44:07.026919 1096371 system_pods.go:61] "etcd-ha-220492-m03" [04c1c8e0-cd55-4bcc-99fd-d8a51aa3dde5] Running
	I0603 12:44:07.026922 1096371 system_pods.go:61] "kindnet-5p8f7" [12b97c9f-e363-42c3-9ac9-d808c47de63a] Running
	I0603 12:44:07.026925 1096371 system_pods.go:61] "kindnet-gkd6p" [f810b6d5-e0e8-4b1a-a5ef-c0a44452ecb7] Running
	I0603 12:44:07.026928 1096371 system_pods.go:61] "kindnet-hbl6v" [9f697f13-4a60-4247-bb5e-a8bcdd3336cd] Running
	I0603 12:44:07.026930 1096371 system_pods.go:61] "kube-apiserver-ha-220492" [a5d2882e-9fb6-4c38-b232-5dc8cb7a009e] Running
	I0603 12:44:07.026933 1096371 system_pods.go:61] "kube-apiserver-ha-220492-m02" [8ef1ba46-3175-4524-8979-fbb5f4d0608a] Running
	I0603 12:44:07.026936 1096371 system_pods.go:61] "kube-apiserver-ha-220492-m03" [f91fd8b8-eb1c-4441-88fc-2955f82c8cda] Running
	I0603 12:44:07.026939 1096371 system_pods.go:61] "kube-controller-manager-ha-220492" [38d8a477-8b59-43d0-9004-a70023c07b14] Running
	I0603 12:44:07.026944 1096371 system_pods.go:61] "kube-controller-manager-ha-220492-m02" [9cde04ca-9c61-4015-9f2f-08c9db8439cc] Running
	I0603 12:44:07.026947 1096371 system_pods.go:61] "kube-controller-manager-ha-220492-m03" [98b6bd4a-cc01-489d-a1c6-97428cac9348] Running
	I0603 12:44:07.026950 1096371 system_pods.go:61] "kube-proxy-dkzgt" [e1536cb0-2da1-4d9a-a6f7-50adfb8f7c9a] Running
	I0603 12:44:07.026953 1096371 system_pods.go:61] "kube-proxy-m5l8r" [de526b5c-27a0-4830-9634-039d4eab49b5] Running
	I0603 12:44:07.026956 1096371 system_pods.go:61] "kube-proxy-w2hpg" [51a52e47-6a1e-4f9c-ba1b-feb3e362531a] Running
	I0603 12:44:07.026959 1096371 system_pods.go:61] "kube-scheduler-ha-220492" [40a56d71-3787-44fa-a3b7-9d2dc2bcf5ac] Running
	I0603 12:44:07.026962 1096371 system_pods.go:61] "kube-scheduler-ha-220492-m02" [6dede50f-8a71-4a7a-97fa-8cc4d2a6ef8c] Running
	I0603 12:44:07.026966 1096371 system_pods.go:61] "kube-scheduler-ha-220492-m03" [f3205a74-3d7e-465a-ac13-f1e36535f16a] Running
	I0603 12:44:07.026973 1096371 system_pods.go:61] "kube-vip-ha-220492" [577ecb1f-e5df-4494-b898-7d2d8e79151d] Running
	I0603 12:44:07.026976 1096371 system_pods.go:61] "kube-vip-ha-220492-m02" [a53477a8-aa28-443e-bf5d-1abb3b66ce57] Running
	I0603 12:44:07.026979 1096371 system_pods.go:61] "kube-vip-ha-220492-m03" [6495d959-2043-486b-b207-6314877f6d43] Running
	I0603 12:44:07.026982 1096371 system_pods.go:61] "storage-provisioner" [f85b2808-26fa-4608-a208-2c11eaddc293] Running
	I0603 12:44:07.026987 1096371 system_pods.go:74] duration metric: took 187.288244ms to wait for pod list to return data ...
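The system_pods phase lists the kube-system namespace once and checks that every expected pod is present and Running, which is what produces the 24-pod inventory above. A minimal sketch of that inventory follows; counting Running pods, rather than reproducing minikube's exact bookkeeping, is an illustrative simplification.

// Hedged sketch of the kube-system inventory: list the namespace once and
// count how many pods report phase Running.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	fmt.Printf("%d kube-system pods found, %d Running\n", len(pods.Items), running)
}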
	I0603 12:44:07.026996 1096371 default_sa.go:34] waiting for default service account to be created ...
	I0603 12:44:07.211455 1096371 request.go:629] Waited for 184.374699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0603 12:44:07.211526 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0603 12:44:07.211532 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:07.211540 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:07.211545 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:07.215131 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:07.215283 1096371 default_sa.go:45] found service account: "default"
	I0603 12:44:07.215298 1096371 default_sa.go:55] duration metric: took 188.293905ms for default service account to be created ...
	I0603 12:44:07.215307 1096371 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 12:44:07.411495 1096371 request.go:629] Waited for 196.082869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:44:07.411581 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0603 12:44:07.411590 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:07.411598 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:07.411604 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:07.418357 1096371 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0603 12:44:07.424425 1096371 system_pods.go:86] 24 kube-system pods found
	I0603 12:44:07.424456 1096371 system_pods.go:89] "coredns-7db6d8ff4d-d2tgp" [534e15ed-2e68-4275-8725-099d7240c25d] Running
	I0603 12:44:07.424460 1096371 system_pods.go:89] "coredns-7db6d8ff4d-q7687" [4e78d9e6-8feb-44ef-b44d-ed6039ab00ee] Running
	I0603 12:44:07.424465 1096371 system_pods.go:89] "etcd-ha-220492" [f2bcc3d2-bb06-4775-9080-72f7652a48b1] Running
	I0603 12:44:07.424468 1096371 system_pods.go:89] "etcd-ha-220492-m02" [035f5269-9ad9-4be8-a582-59bf15e1f6f1] Running
	I0603 12:44:07.424472 1096371 system_pods.go:89] "etcd-ha-220492-m03" [04c1c8e0-cd55-4bcc-99fd-d8a51aa3dde5] Running
	I0603 12:44:07.424476 1096371 system_pods.go:89] "kindnet-5p8f7" [12b97c9f-e363-42c3-9ac9-d808c47de63a] Running
	I0603 12:44:07.424480 1096371 system_pods.go:89] "kindnet-gkd6p" [f810b6d5-e0e8-4b1a-a5ef-c0a44452ecb7] Running
	I0603 12:44:07.424484 1096371 system_pods.go:89] "kindnet-hbl6v" [9f697f13-4a60-4247-bb5e-a8bcdd3336cd] Running
	I0603 12:44:07.424488 1096371 system_pods.go:89] "kube-apiserver-ha-220492" [a5d2882e-9fb6-4c38-b232-5dc8cb7a009e] Running
	I0603 12:44:07.424492 1096371 system_pods.go:89] "kube-apiserver-ha-220492-m02" [8ef1ba46-3175-4524-8979-fbb5f4d0608a] Running
	I0603 12:44:07.424495 1096371 system_pods.go:89] "kube-apiserver-ha-220492-m03" [f91fd8b8-eb1c-4441-88fc-2955f82c8cda] Running
	I0603 12:44:07.424499 1096371 system_pods.go:89] "kube-controller-manager-ha-220492" [38d8a477-8b59-43d0-9004-a70023c07b14] Running
	I0603 12:44:07.424504 1096371 system_pods.go:89] "kube-controller-manager-ha-220492-m02" [9cde04ca-9c61-4015-9f2f-08c9db8439cc] Running
	I0603 12:44:07.424508 1096371 system_pods.go:89] "kube-controller-manager-ha-220492-m03" [98b6bd4a-cc01-489d-a1c6-97428cac9348] Running
	I0603 12:44:07.424511 1096371 system_pods.go:89] "kube-proxy-dkzgt" [e1536cb0-2da1-4d9a-a6f7-50adfb8f7c9a] Running
	I0603 12:44:07.424515 1096371 system_pods.go:89] "kube-proxy-m5l8r" [de526b5c-27a0-4830-9634-039d4eab49b5] Running
	I0603 12:44:07.424523 1096371 system_pods.go:89] "kube-proxy-w2hpg" [51a52e47-6a1e-4f9c-ba1b-feb3e362531a] Running
	I0603 12:44:07.424526 1096371 system_pods.go:89] "kube-scheduler-ha-220492" [40a56d71-3787-44fa-a3b7-9d2dc2bcf5ac] Running
	I0603 12:44:07.424530 1096371 system_pods.go:89] "kube-scheduler-ha-220492-m02" [6dede50f-8a71-4a7a-97fa-8cc4d2a6ef8c] Running
	I0603 12:44:07.424536 1096371 system_pods.go:89] "kube-scheduler-ha-220492-m03" [f3205a74-3d7e-465a-ac13-f1e36535f16a] Running
	I0603 12:44:07.424540 1096371 system_pods.go:89] "kube-vip-ha-220492" [577ecb1f-e5df-4494-b898-7d2d8e79151d] Running
	I0603 12:44:07.424545 1096371 system_pods.go:89] "kube-vip-ha-220492-m02" [a53477a8-aa28-443e-bf5d-1abb3b66ce57] Running
	I0603 12:44:07.424550 1096371 system_pods.go:89] "kube-vip-ha-220492-m03" [6495d959-2043-486b-b207-6314877f6d43] Running
	I0603 12:44:07.424556 1096371 system_pods.go:89] "storage-provisioner" [f85b2808-26fa-4608-a208-2c11eaddc293] Running
	I0603 12:44:07.424562 1096371 system_pods.go:126] duration metric: took 209.250131ms to wait for k8s-apps to be running ...
	I0603 12:44:07.424571 1096371 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 12:44:07.424620 1096371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:44:07.442242 1096371 system_svc.go:56] duration metric: took 17.658928ms WaitForService to wait for kubelet
	I0603 12:44:07.442285 1096371 kubeadm.go:576] duration metric: took 18.133140007s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:44:07.442306 1096371 node_conditions.go:102] verifying NodePressure condition ...
	I0603 12:44:07.610604 1096371 request.go:629] Waited for 168.185982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0603 12:44:07.610688 1096371 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0603 12:44:07.610697 1096371 round_trippers.go:469] Request Headers:
	I0603 12:44:07.610705 1096371 round_trippers.go:473]     Accept: application/json, */*
	I0603 12:44:07.610711 1096371 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0603 12:44:07.614118 1096371 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0603 12:44:07.615044 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:44:07.615063 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:44:07.615075 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:44:07.615078 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:44:07.615083 1096371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 12:44:07.615085 1096371 node_conditions.go:123] node cpu capacity is 2
	I0603 12:44:07.615089 1096371 node_conditions.go:105] duration metric: took 172.779216ms to run NodePressure ...
	I0603 12:44:07.615101 1096371 start.go:240] waiting for startup goroutines ...
	I0603 12:44:07.615124 1096371 start.go:254] writing updated cluster config ...
	I0603 12:44:07.615413 1096371 ssh_runner.go:195] Run: rm -f paused
	I0603 12:44:07.672218 1096371 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 12:44:07.674263 1096371 out.go:177] * Done! kubectl is now configured to use "ha-220492" cluster and "default" namespace by default
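Note on the "Waited for ... due to client-side throttling, not priority and fairness" entries above: these waits are imposed locally by client-go's client-side rate limiter, not by the API server. The sketch below shows, under assumed values, how such a client is configured and how it lists kube-system pods the same way the log does; the kubeconfig path and the QPS/Burst numbers are illustrative assumptions, not the values minikube actually uses.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path (assumption); minikube writes its own kubeconfig for the test run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// Client-side rate limiting: requests beyond Burst are delayed locally, which is what
	// produces the "Waited for ... due to client-side throttling" lines in the log above.
	cfg.QPS = 5    // assumed value, for illustration only
	cfg.Burst = 10 // assumed value, for illustration only

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same kind of request as GET /api/v1/namespaces/kube-system/pods in the log.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}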
	
	
	==> CRI-O <==
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.468620277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418911468595980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9163c6c5-30cf-4209-b127-b50a61818cbf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.469446610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84d94e32-bc3a-4d93-9dab-88447bb9e0de name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.469499926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84d94e32-bc3a-4d93-9dab-88447bb9e0de name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.469712802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717418649829753613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac,PodSandboxId:dfdd288abd0dbb2c347f4b52b1f4e58c064f35d506428817069f29bad7ee5c6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717418505770304298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830825065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830864910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e
68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f,PodSandboxId:2f740d6ed5034e98ab9ab5be6be150e83efaaa3ca14221bf7e2586014c1790e5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1717418503917588747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171741850
1295981097,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b,PodSandboxId:c6c8762f9acbca1e30161dad5fa6f6e02048664609c121db69cc5ffa0fb414fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17174184833
00952885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304b8acf7929042c2226774df1f72a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c,PodSandboxId:5c63ebce798f7c7bfe5cbe2b12cef00c703beddbd786066f6d5df12732ae6a1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717418481117930462,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b,PodSandboxId:03368aff48ff11a612512e49f5c15d496c6a6ada9e73d6b27bef501137483fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717418481049274321,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717418481055328781,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717418480896785560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84d94e32-bc3a-4d93-9dab-88447bb9e0de name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.510642106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4c287fe-60e2-492c-9a71-e4f635307a8b name=/runtime.v1.RuntimeService/Version
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.510717276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4c287fe-60e2-492c-9a71-e4f635307a8b name=/runtime.v1.RuntimeService/Version
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.512079458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cab56aeb-2b34-4b59-a203-075026e804a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.512843079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418911512815048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cab56aeb-2b34-4b59-a203-075026e804a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.513341145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fb05c36-ca51-4f09-94a8-664a243dbd35 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.513388725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fb05c36-ca51-4f09-94a8-664a243dbd35 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.513760944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717418649829753613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac,PodSandboxId:dfdd288abd0dbb2c347f4b52b1f4e58c064f35d506428817069f29bad7ee5c6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717418505770304298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830825065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830864910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e
68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f,PodSandboxId:2f740d6ed5034e98ab9ab5be6be150e83efaaa3ca14221bf7e2586014c1790e5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1717418503917588747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171741850
1295981097,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b,PodSandboxId:c6c8762f9acbca1e30161dad5fa6f6e02048664609c121db69cc5ffa0fb414fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17174184833
00952885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304b8acf7929042c2226774df1f72a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c,PodSandboxId:5c63ebce798f7c7bfe5cbe2b12cef00c703beddbd786066f6d5df12732ae6a1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717418481117930462,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b,PodSandboxId:03368aff48ff11a612512e49f5c15d496c6a6ada9e73d6b27bef501137483fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717418481049274321,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717418481055328781,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717418480896785560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fb05c36-ca51-4f09-94a8-664a243dbd35 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.554464721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70b86167-9a87-4116-a6ba-cb36d86bdd9a name=/runtime.v1.RuntimeService/Version
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.554533731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70b86167-9a87-4116-a6ba-cb36d86bdd9a name=/runtime.v1.RuntimeService/Version
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.556909505Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec517ada-758b-49b1-9e95-ceb968266b77 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.557790265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418911557763683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec517ada-758b-49b1-9e95-ceb968266b77 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.558679175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c4a9614-f565-44bb-b907-602a06adc1f2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.558740509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c4a9614-f565-44bb-b907-602a06adc1f2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.558981697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717418649829753613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac,PodSandboxId:dfdd288abd0dbb2c347f4b52b1f4e58c064f35d506428817069f29bad7ee5c6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717418505770304298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830825065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830864910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e
68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f,PodSandboxId:2f740d6ed5034e98ab9ab5be6be150e83efaaa3ca14221bf7e2586014c1790e5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1717418503917588747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171741850
1295981097,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b,PodSandboxId:c6c8762f9acbca1e30161dad5fa6f6e02048664609c121db69cc5ffa0fb414fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17174184833
00952885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304b8acf7929042c2226774df1f72a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c,PodSandboxId:5c63ebce798f7c7bfe5cbe2b12cef00c703beddbd786066f6d5df12732ae6a1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717418481117930462,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b,PodSandboxId:03368aff48ff11a612512e49f5c15d496c6a6ada9e73d6b27bef501137483fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717418481049274321,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717418481055328781,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717418480896785560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c4a9614-f565-44bb-b907-602a06adc1f2 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.596921753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b345d7c-d27f-47a3-b30e-0cd59697458d name=/runtime.v1.RuntimeService/Version
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.596995487Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b345d7c-d27f-47a3-b30e-0cd59697458d name=/runtime.v1.RuntimeService/Version
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.600387708Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47178fcd-b16f-4766-b830-1d3a0afbfcc7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.601385246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717418911601359473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47178fcd-b16f-4766-b830-1d3a0afbfcc7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.602216474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c0ac2ec-f519-4744-b1fa-b9217e2ac63d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.602270345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c0ac2ec-f519-4744-b1fa-b9217e2ac63d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:48:31 ha-220492 crio[683]: time="2024-06-03 12:48:31.602526988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717418649829753613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac,PodSandboxId:dfdd288abd0dbb2c347f4b52b1f4e58c064f35d506428817069f29bad7ee5c6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717418505770304298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830825065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717418505830864910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e
68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f,PodSandboxId:2f740d6ed5034e98ab9ab5be6be150e83efaaa3ca14221bf7e2586014c1790e5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717418503917588747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717418501295981097,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b,PodSandboxId:c6c8762f9acbca1e30161dad5fa6f6e02048664609c121db69cc5ffa0fb414fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717418483300952885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304b8acf7929042c2226774df1f72a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c,PodSandboxId:5c63ebce798f7c7bfe5cbe2b12cef00c703beddbd786066f6d5df12732ae6a1b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717418481117930462,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b,PodSandboxId:03368aff48ff11a612512e49f5c15d496c6a6ada9e73d6b27bef501137483fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717418481049274321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717418481055328781,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717418480896785560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c0ac2ec-f519-4744-b1fa-b9217e2ac63d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	76c9e115804f2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   c73634cd0ed83       busybox-fc5497c4f-5z6j2
	50f524d71cd1f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   6d9c5f1a45b9e       coredns-7db6d8ff4d-d2tgp
	7c67da4b30c5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   1b5bd65416e85       coredns-7db6d8ff4d-q7687
	1b000c5164ef9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   dfdd288abd0db       storage-provisioner
	e802c94fbf7b6       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    6 minutes ago       Running             kindnet-cni               0                   2f740d6ed5034       kindnet-hbl6v
	16c93dcdad420       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago       Running             kube-proxy                0                   4d41713a63ac5       kube-proxy-w2hpg
	1fe31d7dcb7c4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   c6c8762f9acbc       kube-vip-ha-220492
	f2c6a50d20a2f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago       Running             kube-apiserver            0                   5c63ebce798f7       kube-apiserver-ha-220492
	3f1c2bb32752f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   ba8b6aec50011       etcd-ha-220492
	24aa5625e9a8a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago       Running             kube-controller-manager   0                   03368aff48ff1       kube-controller-manager-ha-220492
	86f8a60e53334       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago       Running             kube-scheduler            0                   b96e7f287499d       kube-scheduler-ha-220492
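
The table above is the CRI-O (crictl) view of containers on the primary control-plane node. Assuming the minikube profile name matches the node name ha-220492 seen in the pod names (an assumption based on this log, not stated explicitly), an equivalent snapshot can usually be reproduced with:

  out/minikube-linux-amd64 -p ha-220492 ssh "sudo crictl ps -a"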
	
	
	==> coredns [50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e] <==
	[INFO] 10.244.0.4:58974 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000157943s
	[INFO] 10.244.0.4:40096 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002616941s
	[INFO] 10.244.0.4:60549 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175831s
	[INFO] 10.244.0.4:38004 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110811s
	[INFO] 10.244.0.4:35443 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076183s
	[INFO] 10.244.1.2:40738 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162292s
	[INFO] 10.244.1.2:47526 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136462s
	[INFO] 10.244.1.2:53322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114547s
	[INFO] 10.244.2.2:47547 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145066s
	[INFO] 10.244.2.2:43785 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094815s
	[INFO] 10.244.2.2:54501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319495s
	[INFO] 10.244.2.2:55983 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086973s
	[INFO] 10.244.2.2:56195 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069334s
	[INFO] 10.244.0.4:42110 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064533s
	[INFO] 10.244.0.4:48697 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058629s
	[INFO] 10.244.1.2:42865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168668s
	[INFO] 10.244.1.2:56794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111494s
	[INFO] 10.244.1.2:58581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084125s
	[INFO] 10.244.1.2:50954 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099179s
	[INFO] 10.244.2.2:42915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142235s
	[INFO] 10.244.2.2:49410 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102812s
	[INFO] 10.244.0.4:51178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00019093s
	[INFO] 10.244.1.2:40502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168017s
	[INFO] 10.244.1.2:35921 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180824s
	[INFO] 10.244.1.2:40369 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155572s
	
	
	==> coredns [7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934] <==
	[INFO] 10.244.2.2:43079 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000108982s
	[INFO] 10.244.2.2:44322 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001648137s
	[INFO] 10.244.0.4:45431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092001s
	[INFO] 10.244.0.4:56388 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.032198469s
	[INFO] 10.244.0.4:55805 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000292911s
	[INFO] 10.244.1.2:36984 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001651953s
	[INFO] 10.244.1.2:59707 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013257s
	[INFO] 10.244.1.2:43132 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294041s
	[INFO] 10.244.1.2:50044 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148444s
	[INFO] 10.244.1.2:46108 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000262338s
	[INFO] 10.244.2.2:59857 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001619455s
	[INFO] 10.244.2.2:37703 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098955s
	[INFO] 10.244.2.2:51044 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180769s
	[INFO] 10.244.0.4:56245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077571s
	[INFO] 10.244.0.4:40429 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005283s
	[INFO] 10.244.2.2:55900 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100265s
	[INFO] 10.244.2.2:57003 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000955s
	[INFO] 10.244.0.4:39653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107486s
	[INFO] 10.244.0.4:50505 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152153s
	[INFO] 10.244.0.4:40598 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156098s
	[INFO] 10.244.1.2:37651 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154868s
	[INFO] 10.244.2.2:47903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111761s
	[INFO] 10.244.2.2:55067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000076585s
	[INFO] 10.244.2.2:39348 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123715s
	[INFO] 10.244.2.2:33705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109704s
	
	
	==> describe nodes <==
	Name:               ha-220492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_41_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:41:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:48:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:44:31 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:44:31 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:44:31 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:44:31 +0000   Mon, 03 Jun 2024 12:41:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-220492
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bebf6ef8229e4a0498f737d165a96550
	  System UUID:                bebf6ef8-229e-4a04-98f7-37d165a96550
	  Boot ID:                    38c7d220-f8e0-4890-a7e1-09c3bc826d0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5z6j2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 coredns-7db6d8ff4d-d2tgp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m51s
	  kube-system                 coredns-7db6d8ff4d-q7687             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m51s
	  kube-system                 etcd-ha-220492                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m4s
	  kube-system                 kindnet-hbl6v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m51s
	  kube-system                 kube-apiserver-ha-220492             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-controller-manager-ha-220492    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-proxy-w2hpg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-scheduler-ha-220492             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-vip-ha-220492                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m50s  kube-proxy       
	  Normal  Starting                 7m4s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m4s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m4s   kubelet          Node ha-220492 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m4s   kubelet          Node ha-220492 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m4s   kubelet          Node ha-220492 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m52s  node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal  NodeReady                6m46s  kubelet          Node ha-220492 status is now: NodeReady
	  Normal  RegisteredNode           5m39s  node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal  RegisteredNode           4m28s  node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	
	
	Name:               ha-220492-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_42_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:42:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:45:08 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 12:44:37 +0000   Mon, 03 Jun 2024 12:45:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 12:44:37 +0000   Mon, 03 Jun 2024 12:45:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 12:44:37 +0000   Mon, 03 Jun 2024 12:45:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 12:44:37 +0000   Mon, 03 Jun 2024 12:45:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    ha-220492-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1658a5c6e8394d57a265332808e714ab
	  System UUID:                1658a5c6-e839-4d57-a265-332808e714ab
	  Boot ID:                    a5e41f0e-e9a1-4e3c-9d02-9b2c849b1b76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m229v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-ha-220492-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m55s
	  kube-system                 kindnet-5p8f7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m57s
	  kube-system                 kube-apiserver-ha-220492-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-ha-220492-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-dkzgt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-scheduler-ha-220492-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-vip-ha-220492-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m53s                  kube-proxy       
	  Normal  RegisteredNode           5m57s                  node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet          Node ha-220492-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet          Node ha-220492-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m57s)  kubelet          Node ha-220492-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  NodeNotReady             2m43s                  node-controller  Node ha-220492-m02 status is now: NodeNotReady
	
	
	Name:               ha-220492-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_43_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:43:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:48:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:44:15 +0000   Mon, 03 Jun 2024 12:43:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:44:15 +0000   Mon, 03 Jun 2024 12:43:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:44:15 +0000   Mon, 03 Jun 2024 12:43:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:44:15 +0000   Mon, 03 Jun 2024 12:43:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-220492-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1055ed032f443e996570a5a0e130a0f
	  System UUID:                c1055ed0-32f4-43e9-9657-0a5a0e130a0f
	  Boot ID:                    eb7a7193-bab4-4090-948a-7512b86c5924
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-stmtj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-ha-220492-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m44s
	  kube-system                 kindnet-gkd6p                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m46s
	  kube-system                 kube-apiserver-ha-220492-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-ha-220492-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-proxy-m5l8r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-scheduler-ha-220492-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-vip-ha-220492-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m46s (x8 over 4m46s)  kubelet          Node ha-220492-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x8 over 4m46s)  kubelet          Node ha-220492-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x7 over 4m46s)  kubelet          Node ha-220492-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	
	
	Name:               ha-220492-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_44_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:44:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:48:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:45:12 +0000   Mon, 03 Jun 2024 12:44:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:45:12 +0000   Mon, 03 Jun 2024 12:44:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:45:12 +0000   Mon, 03 Jun 2024 12:44:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:45:12 +0000   Mon, 03 Jun 2024 12:44:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-220492-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6e57d6c6ec64017a56a85c3aa55fe71
	  System UUID:                c6e57d6c-6ec6-4017-a56a-85c3aa55fe71
	  Boot ID:                    89c71749-9840-4d2b-813a-335eed63de23
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-l7rsb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m50s
	  kube-system                 kube-proxy-ggdgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m50s (x2 over 3m51s)  kubelet          Node ha-220492-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x2 over 3m51s)  kubelet          Node ha-220492-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x2 over 3m51s)  kubelet          Node ha-220492-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal  NodeReady                3m41s                  kubelet          Node ha-220492-m04 status is now: NodeReady
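
In the node descriptions above, ha-220492-m02 is the only node whose conditions are Unknown and which carries node.kubernetes.io/unreachable taints; its kubelet stopped posting status at 12:45:48, while ha-220492, ha-220492-m03 and ha-220492-m04 remain Ready. If the kubeconfig context is named after the profile (an assumption; the context name is not shown in this excerpt), the same view can be regenerated with:

  kubectl --context ha-220492 get nodes -o wide
  kubectl --context ha-220492 describe node ha-220492-m02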
	
	
	==> dmesg <==
	[Jun 3 12:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051399] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040129] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.496498] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.471458] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.572370] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 12:41] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.058474] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056951] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.164438] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.150241] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.258150] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.221845] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.557727] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.059101] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.202766] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.082984] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.083493] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.330072] kauditd_printk_skb: 68 callbacks suppressed
	
	
	==> etcd [3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156] <==
	{"level":"warn","ts":"2024-06-03T12:48:31.860721Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.873172Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.886273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.890149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.904536Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.911975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.912897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.922904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.92889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.932762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.942686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.948774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.955571Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.955743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.959673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.962757Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.969735Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.976433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.982426Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.986961Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.990477Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:31.996828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:32.002422Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:32.014965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-03T12:48:32.055904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:48:32 up 7 min,  0 users,  load average: 0.38, 0.31, 0.16
	Linux ha-220492 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f] <==
	I0603 12:47:55.132369       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:48:05.149548       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:48:05.149619       1 main.go:227] handling current node
	I0603 12:48:05.149653       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:48:05.149658       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:48:05.149811       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0603 12:48:05.149840       1 main.go:250] Node ha-220492-m03 has CIDR [10.244.2.0/24] 
	I0603 12:48:05.149891       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:48:05.149911       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:48:15.159477       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:48:15.159564       1 main.go:227] handling current node
	I0603 12:48:15.159589       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:48:15.159606       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:48:15.159768       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0603 12:48:15.159790       1 main.go:250] Node ha-220492-m03 has CIDR [10.244.2.0/24] 
	I0603 12:48:15.159853       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:48:15.159871       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:48:25.175720       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:48:25.175801       1 main.go:227] handling current node
	I0603 12:48:25.175824       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:48:25.175841       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:48:25.175973       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0603 12:48:25.175992       1 main.go:250] Node ha-220492-m03 has CIDR [10.244.2.0/24] 
	I0603 12:48:25.176116       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:48:25.176139       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c] <==
	I0603 12:41:26.555382       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 12:41:27.119199       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 12:41:27.135822       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 12:41:27.303429       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 12:41:40.412270       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0603 12:41:40.610430       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0603 12:44:11.353702       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40030: use of closed network connection
	E0603 12:44:11.562518       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40048: use of closed network connection
	E0603 12:44:11.742844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40062: use of closed network connection
	E0603 12:44:11.979592       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40072: use of closed network connection
	E0603 12:44:12.166800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40090: use of closed network connection
	E0603 12:44:12.353276       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40118: use of closed network connection
	E0603 12:44:12.541514       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40124: use of closed network connection
	E0603 12:44:12.730492       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40138: use of closed network connection
	E0603 12:44:12.935958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40158: use of closed network connection
	E0603 12:44:13.258657       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40180: use of closed network connection
	E0603 12:44:13.443719       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40206: use of closed network connection
	E0603 12:44:13.629404       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40216: use of closed network connection
	E0603 12:44:13.809363       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40230: use of closed network connection
	E0603 12:44:14.007269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40252: use of closed network connection
	E0603 12:44:14.198902       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40272: use of closed network connection
	E0603 12:44:42.929146       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0603 12:44:42.929186       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0603 12:44:42.930374       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0603 12:44:42.930593       1 timeout.go:142] post-timeout activity - time-elapsed: 2.024799ms, GET "/api/v1/nodes" result: <nil>
	
	
	==> kube-controller-manager [24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b] <==
	I0603 12:43:49.913966       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220492-m03"
	I0603 12:44:08.594527       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.593843ms"
	I0603 12:44:08.627660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.878377ms"
	I0603 12:44:08.627814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.191µs"
	I0603 12:44:08.637872       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="412.096µs"
	I0603 12:44:08.798573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.084361ms"
	I0603 12:44:08.990690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="192.043093ms"
	E0603 12:44:08.990947       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0603 12:44:09.170301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="179.042579ms"
	I0603 12:44:09.170723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="213.135µs"
	I0603 12:44:10.059779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.898187ms"
	I0603 12:44:10.060098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.975µs"
	I0603 12:44:10.364592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.921701ms"
	I0603 12:44:10.365156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="145.576µs"
	I0603 12:44:10.857084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.566086ms"
	I0603 12:44:10.857350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.316µs"
	E0603 12:44:41.979137       1 certificate_controller.go:146] Sync csr-dsj4z failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-dsj4z": the object has been modified; please apply your changes to the latest version and try again
	E0603 12:44:41.982628       1 certificate_controller.go:146] Sync csr-dsj4z failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-dsj4z": the object has been modified; please apply your changes to the latest version and try again
	I0603 12:44:42.261853       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-220492-m04\" does not exist"
	I0603 12:44:42.277634       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-220492-m04" podCIDRs=["10.244.3.0/24"]
	I0603 12:44:44.938801       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220492-m04"
	I0603 12:44:51.906684       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220492-m04"
	I0603 12:45:48.763656       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220492-m04"
	I0603 12:45:48.860503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.875491ms"
	I0603 12:45:48.861605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.062µs"
	
	
	==> kube-proxy [16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5] <==
	I0603 12:41:41.653918       1 server_linux.go:69] "Using iptables proxy"
	I0603 12:41:41.666625       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	I0603 12:41:41.746204       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:41:41.746284       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:41:41.746307       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:41:41.756637       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:41:41.759208       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:41:41.759292       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:41:41.764714       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:41:41.764758       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:41:41.764789       1 config.go:192] "Starting service config controller"
	I0603 12:41:41.764793       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:41:41.765665       1 config.go:319] "Starting node config controller"
	I0603 12:41:41.765696       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:41:41.865499       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:41:41.865545       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:41:41.865850       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35] <==
	W0603 12:41:25.685528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 12:41:25.685568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 12:41:25.795923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:41:25.796132       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:41:25.805134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:41:25.805179       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:41:25.890812       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:41:25.890868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:41:25.955492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:41:25.955540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 12:41:26.130687       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:41:26.130750       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 12:41:29.273960       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 12:43:45.274599       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-m5l8r\": pod kube-proxy-m5l8r is already assigned to node \"ha-220492-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-m5l8r" node="ha-220492-m03"
	E0603 12:43:45.274917       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-m5l8r\": pod kube-proxy-m5l8r is already assigned to node \"ha-220492-m03\"" pod="kube-system/kube-proxy-m5l8r"
	I0603 12:43:45.274999       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-m5l8r" node="ha-220492-m03"
	I0603 12:44:08.559432       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="9566ea60-1af8-46e1-93a0-071ebaa32d09" pod="default/busybox-fc5497c4f-m229v" assumedNode="ha-220492-m02" currentNode="ha-220492-m03"
	E0603 12:44:08.570382       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-m229v\": pod busybox-fc5497c4f-m229v is already assigned to node \"ha-220492-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-m229v" node="ha-220492-m03"
	E0603 12:44:08.570476       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9566ea60-1af8-46e1-93a0-071ebaa32d09(default/busybox-fc5497c4f-m229v) was assumed on ha-220492-m03 but assigned to ha-220492-m02" pod="default/busybox-fc5497c4f-m229v"
	E0603 12:44:08.570509       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-m229v\": pod busybox-fc5497c4f-m229v is already assigned to node \"ha-220492-m02\"" pod="default/busybox-fc5497c4f-m229v"
	I0603 12:44:08.570570       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-m229v" node="ha-220492-m02"
	E0603 12:44:42.337402       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ggdgz\": pod kube-proxy-ggdgz is already assigned to node \"ha-220492-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ggdgz" node="ha-220492-m04"
	E0603 12:44:42.337477       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6de7aa57-0339-4982-a792-5adf344ad155(kube-system/kube-proxy-ggdgz) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ggdgz"
	E0603 12:44:42.337499       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ggdgz\": pod kube-proxy-ggdgz is already assigned to node \"ha-220492-m04\"" pod="kube-system/kube-proxy-ggdgz"
	I0603 12:44:42.337519       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ggdgz" node="ha-220492-m04"
	
	
	==> kubelet <==
	Jun 03 12:44:27 ha-220492 kubelet[1372]: E0603 12:44:27.283768    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:44:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:44:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:44:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:44:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:45:27 ha-220492 kubelet[1372]: E0603 12:45:27.279817    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:45:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:45:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:45:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:45:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:46:27 ha-220492 kubelet[1372]: E0603 12:46:27.280890    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:46:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:46:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:46:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:46:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:47:27 ha-220492 kubelet[1372]: E0603 12:47:27.280298    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:47:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:47:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:47:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:47:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:48:27 ha-220492 kubelet[1372]: E0603 12:48:27.280430    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:48:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:48:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:48:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:48:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
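The "Operation cannot be fulfilled ... the object has been modified" messages in the controller-manager and scheduler output above are ordinary optimistic-concurrency conflicts: two writers raced on the same object, and the loser is expected to re-read the latest resourceVersion and retry. As a hedged illustration only (this is not minikube's or Kubernetes' own code; the clientset variable, namespace, and ReplicaSet name are borrowed from the log purely for flavour, and the annotation mutation is hypothetical), a client-go caller would typically wrap such an update in retry.RetryOnConflict:

package postmortem

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// touchReplicaSet is an illustrative sketch: re-read the object and reapply the
// change whenever the API server answers with a conflict, which is exactly the
// situation the "object has been modified" log lines above describe.
func touchReplicaSet(ctx context.Context, cs kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		rs, err := cs.AppsV1().ReplicaSets("default").Get(ctx, "busybox-fc5497c4f", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if rs.Annotations == nil {
			rs.Annotations = map[string]string{}
		}
		rs.Annotations["example.invalid/touched"] = "true" // hypothetical mutation
		_, err = cs.AppsV1().ReplicaSets("default").Update(ctx, rs, metav1.UpdateOptions{})
		return err
	})
}

The controllers in the log do exactly this kind of retry, which is why the conflicts appear as noise rather than as test failures.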
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-220492 -n ha-220492
helpers_test.go:261: (dbg) Run:  kubectl --context ha-220492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.03s)
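The post-mortem step above shells out to kubectl with --field-selector=status.phase!=Running to surface any pod that is not healthy after the failure. A hedged client-go equivalent (a sketch under the assumption of an already-configured clientset, not the helper the suite actually uses) looks like:

package postmortem

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nonRunningPods mirrors the kubectl field selector used by the post-mortem
// helper above: it reports every pod in the cluster whose phase is not Running.
func nonRunningPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
	return nil
}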

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-220492 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-220492 -v=7 --alsologtostderr
E0603 12:49:58.228941 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 12:50:25.914012 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-220492 -v=7 --alsologtostderr: exit status 82 (2m1.998743821s)

                                                
                                                
-- stdout --
	* Stopping node "ha-220492-m04"  ...
	* Stopping node "ha-220492-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:48:33.515394 1102167 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:48:33.515628 1102167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:33.515637 1102167 out.go:304] Setting ErrFile to fd 2...
	I0603 12:48:33.515641 1102167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:48:33.515848 1102167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:48:33.516092 1102167 out.go:298] Setting JSON to false
	I0603 12:48:33.516174 1102167 mustload.go:65] Loading cluster: ha-220492
	I0603 12:48:33.516610 1102167 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:48:33.516709 1102167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:48:33.516917 1102167 mustload.go:65] Loading cluster: ha-220492
	I0603 12:48:33.517089 1102167 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:48:33.517131 1102167 stop.go:39] StopHost: ha-220492-m04
	I0603 12:48:33.517591 1102167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:33.517652 1102167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:33.533240 1102167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0603 12:48:33.533734 1102167 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:33.534341 1102167 main.go:141] libmachine: Using API Version  1
	I0603 12:48:33.534371 1102167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:33.534751 1102167 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:33.537105 1102167 out.go:177] * Stopping node "ha-220492-m04"  ...
	I0603 12:48:33.538506 1102167 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 12:48:33.538551 1102167 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:48:33.538865 1102167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 12:48:33.538897 1102167 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:48:33.541818 1102167 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:33.542300 1102167 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:44:29 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:48:33.542335 1102167 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:48:33.542465 1102167 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:48:33.542661 1102167 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:48:33.542815 1102167 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:48:33.542975 1102167 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:48:33.632305 1102167 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 12:48:33.686768 1102167 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 12:48:33.741744 1102167 main.go:141] libmachine: Stopping "ha-220492-m04"...
	I0603 12:48:33.741774 1102167 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:48:33.743562 1102167 main.go:141] libmachine: (ha-220492-m04) Calling .Stop
	I0603 12:48:33.747278 1102167 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 0/120
	I0603 12:48:35.021819 1102167 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:48:35.023184 1102167 main.go:141] libmachine: Machine "ha-220492-m04" was stopped.
	I0603 12:48:35.023203 1102167 stop.go:75] duration metric: took 1.484699468s to stop
	I0603 12:48:35.023223 1102167 stop.go:39] StopHost: ha-220492-m03
	I0603 12:48:35.023639 1102167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:48:35.023705 1102167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:48:35.039606 1102167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0603 12:48:35.040044 1102167 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:48:35.040549 1102167 main.go:141] libmachine: Using API Version  1
	I0603 12:48:35.040572 1102167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:48:35.040913 1102167 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:48:35.043060 1102167 out.go:177] * Stopping node "ha-220492-m03"  ...
	I0603 12:48:35.044291 1102167 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 12:48:35.044316 1102167 main.go:141] libmachine: (ha-220492-m03) Calling .DriverName
	I0603 12:48:35.044556 1102167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 12:48:35.044583 1102167 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHHostname
	I0603 12:48:35.047931 1102167 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:35.048292 1102167 main.go:141] libmachine: (ha-220492-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:60:87", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:43:09 +0000 UTC Type:0 Mac:52:54:00:ae:60:87 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-220492-m03 Clientid:01:52:54:00:ae:60:87}
	I0603 12:48:35.048328 1102167 main.go:141] libmachine: (ha-220492-m03) DBG | domain ha-220492-m03 has defined IP address 192.168.39.169 and MAC address 52:54:00:ae:60:87 in network mk-ha-220492
	I0603 12:48:35.048467 1102167 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHPort
	I0603 12:48:35.048677 1102167 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHKeyPath
	I0603 12:48:35.048871 1102167 main.go:141] libmachine: (ha-220492-m03) Calling .GetSSHUsername
	I0603 12:48:35.049033 1102167 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m03/id_rsa Username:docker}
	I0603 12:48:35.140855 1102167 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 12:48:35.194527 1102167 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 12:48:35.256262 1102167 main.go:141] libmachine: Stopping "ha-220492-m03"...
	I0603 12:48:35.256318 1102167 main.go:141] libmachine: (ha-220492-m03) Calling .GetState
	I0603 12:48:35.258126 1102167 main.go:141] libmachine: (ha-220492-m03) Calling .Stop
	I0603 12:48:35.261999 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 0/120
	I0603 12:48:36.263439 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 1/120
	I0603 12:48:37.264825 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 2/120
	I0603 12:48:38.266732 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 3/120
	I0603 12:48:39.269397 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 4/120
	I0603 12:48:40.271374 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 5/120
	I0603 12:48:41.273208 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 6/120
	I0603 12:48:42.274660 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 7/120
	I0603 12:48:43.276442 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 8/120
	I0603 12:48:44.277988 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 9/120
	I0603 12:48:45.280345 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 10/120
	I0603 12:48:46.282001 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 11/120
	I0603 12:48:47.283706 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 12/120
	I0603 12:48:48.285249 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 13/120
	I0603 12:48:49.286679 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 14/120
	I0603 12:48:50.288558 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 15/120
	I0603 12:48:51.290110 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 16/120
	I0603 12:48:52.291791 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 17/120
	I0603 12:48:53.293343 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 18/120
	I0603 12:48:54.294980 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 19/120
	I0603 12:48:55.297397 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 20/120
	I0603 12:48:56.299175 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 21/120
	I0603 12:48:57.300573 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 22/120
	I0603 12:48:58.302182 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 23/120
	I0603 12:48:59.303670 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 24/120
	I0603 12:49:00.305552 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 25/120
	I0603 12:49:01.307223 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 26/120
	I0603 12:49:02.308689 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 27/120
	I0603 12:49:03.310445 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 28/120
	I0603 12:49:04.311853 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 29/120
	I0603 12:49:05.313580 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 30/120
	I0603 12:49:06.315158 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 31/120
	I0603 12:49:07.316550 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 32/120
	I0603 12:49:08.318270 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 33/120
	I0603 12:49:09.319853 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 34/120
	I0603 12:49:10.321208 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 35/120
	I0603 12:49:11.322799 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 36/120
	I0603 12:49:12.324171 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 37/120
	I0603 12:49:13.325523 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 38/120
	I0603 12:49:14.326799 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 39/120
	I0603 12:49:15.328722 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 40/120
	I0603 12:49:16.330057 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 41/120
	I0603 12:49:17.332205 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 42/120
	I0603 12:49:18.333606 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 43/120
	I0603 12:49:19.335141 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 44/120
	I0603 12:49:20.337280 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 45/120
	I0603 12:49:21.338912 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 46/120
	I0603 12:49:22.340432 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 47/120
	I0603 12:49:23.342082 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 48/120
	I0603 12:49:24.343485 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 49/120
	I0603 12:49:25.345394 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 50/120
	I0603 12:49:26.346854 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 51/120
	I0603 12:49:27.348379 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 52/120
	I0603 12:49:28.349866 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 53/120
	I0603 12:49:29.351475 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 54/120
	I0603 12:49:30.353224 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 55/120
	I0603 12:49:31.355058 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 56/120
	I0603 12:49:32.356613 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 57/120
	I0603 12:49:33.358225 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 58/120
	I0603 12:49:34.359579 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 59/120
	I0603 12:49:35.361882 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 60/120
	I0603 12:49:36.364065 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 61/120
	I0603 12:49:37.365951 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 62/120
	I0603 12:49:38.367347 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 63/120
	I0603 12:49:39.368861 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 64/120
	I0603 12:49:40.370131 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 65/120
	I0603 12:49:41.371696 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 66/120
	I0603 12:49:42.374127 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 67/120
	I0603 12:49:43.375605 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 68/120
	I0603 12:49:44.376926 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 69/120
	I0603 12:49:45.378894 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 70/120
	I0603 12:49:46.380303 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 71/120
	I0603 12:49:47.381668 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 72/120
	I0603 12:49:48.382982 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 73/120
	I0603 12:49:49.384303 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 74/120
	I0603 12:49:50.386000 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 75/120
	I0603 12:49:51.387770 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 76/120
	I0603 12:49:52.388960 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 77/120
	I0603 12:49:53.390492 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 78/120
	I0603 12:49:54.391875 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 79/120
	I0603 12:49:55.393675 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 80/120
	I0603 12:49:56.395063 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 81/120
	I0603 12:49:57.396393 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 82/120
	I0603 12:49:58.397996 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 83/120
	I0603 12:49:59.399627 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 84/120
	I0603 12:50:00.401225 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 85/120
	I0603 12:50:01.402631 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 86/120
	I0603 12:50:02.404105 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 87/120
	I0603 12:50:03.405556 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 88/120
	I0603 12:50:04.406937 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 89/120
	I0603 12:50:05.408472 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 90/120
	I0603 12:50:06.410796 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 91/120
	I0603 12:50:07.412138 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 92/120
	I0603 12:50:08.413839 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 93/120
	I0603 12:50:09.415181 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 94/120
	I0603 12:50:10.417196 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 95/120
	I0603 12:50:11.418412 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 96/120
	I0603 12:50:12.420033 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 97/120
	I0603 12:50:13.421354 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 98/120
	I0603 12:50:14.423289 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 99/120
	I0603 12:50:15.425462 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 100/120
	I0603 12:50:16.427682 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 101/120
	I0603 12:50:17.429368 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 102/120
	I0603 12:50:18.430565 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 103/120
	I0603 12:50:19.431929 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 104/120
	I0603 12:50:20.433685 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 105/120
	I0603 12:50:21.434957 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 106/120
	I0603 12:50:22.436518 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 107/120
	I0603 12:50:23.437833 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 108/120
	I0603 12:50:24.439370 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 109/120
	I0603 12:50:25.441060 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 110/120
	I0603 12:50:26.442524 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 111/120
	I0603 12:50:27.444111 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 112/120
	I0603 12:50:28.445552 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 113/120
	I0603 12:50:29.446702 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 114/120
	I0603 12:50:30.448395 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 115/120
	I0603 12:50:31.449978 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 116/120
	I0603 12:50:32.451335 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 117/120
	I0603 12:50:33.452608 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 118/120
	I0603 12:50:34.454458 1102167 main.go:141] libmachine: (ha-220492-m03) Waiting for machine to stop 119/120
	I0603 12:50:35.455439 1102167 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 12:50:35.455535 1102167 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 12:50:35.457461 1102167 out.go:177] 
	W0603 12:50:35.459080 1102167 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 12:50:35.459098 1102167 out.go:239] * 
	* 
	W0603 12:50:35.463675 1102167 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:50:35.465069 1102167 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-220492 -v=7 --alsologtostderr" : exit status 82
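The stop command exited with status 82 (GUEST_STOP_TIMEOUT) because ha-220492-m03 never reported a stopped state during the 120 one-second polls logged above, even though ha-220492-m04 stopped in under two seconds. A minimal sketch of that kind of stop-and-wait loop, assuming a deliberately simplified vm interface rather than minikube's real libmachine driver types:

package stopper

import (
	"fmt"
	"time"
)

// vm is a simplified stand-in for a machine driver; minikube's real code goes
// through libmachine and the kvm2 plugin, this is only an illustration.
type vm interface {
	Stop() error
	State() (string, error)
}

// stopWithTimeout asks the VM to stop, then polls its state once per second,
// mirroring the "Waiting for machine to stop i/120" lines above. If the VM is
// still running after the last attempt, it returns an error much like the
// "unable to stop vm, current state \"Running\"" failure in the log.
func stopWithTimeout(m vm, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if s, err := m.State(); err == nil && s == "Stopped" {
			return nil
		}
		time.Sleep(time.Second)
	}
	s, _ := m.State()
	return fmt.Errorf("unable to stop vm, current state %q", s)
}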
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-220492 --wait=true -v=7 --alsologtostderr
E0603 12:52:45.541499 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:54:08.589816 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-220492 --wait=true -v=7 --alsologtostderr: (4m20.648215899s)
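The recovery path then re-runs out/minikube-linux-amd64 start -p ha-220492 --wait=true -v=7 --alsologtostderr, which succeeded here after roughly 4m20s. A hedged sketch of driving that step from Go the way the harness shells out to the binary (the helper name and the timeout budget are illustrative, not the suite's actual code; the binary path and arguments are taken from the log):

package harness

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// restartCluster shells out to the minikube binary with the same arguments as
// the test step above and fails if the command does not finish in time.
func restartCluster() ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute) // illustrative budget
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"start", "-p", "ha-220492", "--wait=true", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return out, fmt.Errorf("minikube start failed: %w", err)
	}
	return out, nil
}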
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-220492
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-220492 -n ha-220492
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-220492 logs -n 25: (1.794626063s)
E0603 12:54:58.228807 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m02:/home/docker/cp-test_ha-220492-m03_ha-220492-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m02 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04:/home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m04 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp testdata/cp-test.txt                                                | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3428699095/001/cp-test_ha-220492-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492:/home/docker/cp-test_ha-220492-m04_ha-220492.txt                       |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492 sudo cat                                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492.txt                                 |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m02:/home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m02 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03:/home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m03 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-220492 node stop m02 -v=7                                                     | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-220492 node start m02 -v=7                                                    | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-220492 -v=7                                                           | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-220492 -v=7                                                                | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-220492 --wait=true -v=7                                                    | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:50 UTC | 03 Jun 24 12:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-220492                                                                | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:54 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:50:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:50:35.514830 1102637 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:50:35.515133 1102637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:50:35.515143 1102637 out.go:304] Setting ErrFile to fd 2...
	I0603 12:50:35.515148 1102637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:50:35.515311 1102637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:50:35.515865 1102637 out.go:298] Setting JSON to false
	I0603 12:50:35.516966 1102637 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12782,"bootTime":1717406253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:50:35.517029 1102637 start.go:139] virtualization: kvm guest
	I0603 12:50:35.519522 1102637 out.go:177] * [ha-220492] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:50:35.520944 1102637 notify.go:220] Checking for updates...
	I0603 12:50:35.520949 1102637 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:50:35.522446 1102637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:50:35.523880 1102637 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:50:35.525245 1102637 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:50:35.526538 1102637 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:50:35.527652 1102637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:50:35.529206 1102637 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:50:35.529361 1102637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:50:35.529833 1102637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:50:35.529896 1102637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:50:35.545387 1102637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0603 12:50:35.545847 1102637 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:50:35.546423 1102637 main.go:141] libmachine: Using API Version  1
	I0603 12:50:35.546443 1102637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:50:35.546886 1102637 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:50:35.547111 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:50:35.582575 1102637 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:50:35.583720 1102637 start.go:297] selected driver: kvm2
	I0603 12:50:35.583742 1102637 start.go:901] validating driver "kvm2" against &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:50:35.583925 1102637 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:50:35.584320 1102637 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:50:35.584400 1102637 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:50:35.599685 1102637 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:50:35.600408 1102637 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:50:35.600502 1102637 cni.go:84] Creating CNI manager for ""
	I0603 12:50:35.600517 1102637 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 12:50:35.600594 1102637 start.go:340] cluster config:
	{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:50:35.600742 1102637 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:50:35.602341 1102637 out.go:177] * Starting "ha-220492" primary control-plane node in "ha-220492" cluster
	I0603 12:50:35.603459 1102637 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:50:35.603491 1102637 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 12:50:35.603506 1102637 cache.go:56] Caching tarball of preloaded images
	I0603 12:50:35.603592 1102637 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:50:35.603607 1102637 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:50:35.603726 1102637 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:50:35.603916 1102637 start.go:360] acquireMachinesLock for ha-220492: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:50:35.603962 1102637 start.go:364] duration metric: took 26.027µs to acquireMachinesLock for "ha-220492"
	I0603 12:50:35.603981 1102637 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:50:35.603992 1102637 fix.go:54] fixHost starting: 
	I0603 12:50:35.604256 1102637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:50:35.604294 1102637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:50:35.619312 1102637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0603 12:50:35.619706 1102637 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:50:35.620170 1102637 main.go:141] libmachine: Using API Version  1
	I0603 12:50:35.620186 1102637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:50:35.620566 1102637 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:50:35.620771 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:50:35.620914 1102637 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:50:35.622495 1102637 fix.go:112] recreateIfNeeded on ha-220492: state=Running err=<nil>
	W0603 12:50:35.622515 1102637 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:50:35.624360 1102637 out.go:177] * Updating the running kvm2 "ha-220492" VM ...
	I0603 12:50:35.625691 1102637 machine.go:94] provisionDockerMachine start ...
	I0603 12:50:35.625714 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:50:35.625912 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:35.628346 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.628817 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:35.628848 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.628947 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:35.629151 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.629350 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.629521 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:35.629695 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:35.629919 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:50:35.629935 1102637 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:50:35.746750 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492
	
	I0603 12:50:35.746779 1102637 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:50:35.747023 1102637 buildroot.go:166] provisioning hostname "ha-220492"
	I0603 12:50:35.747057 1102637 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:50:35.747265 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:35.749695 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.750078 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:35.750104 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.750206 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:35.750394 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.750581 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.750732 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:35.750877 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:35.751088 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:50:35.751101 1102637 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220492 && echo "ha-220492" | sudo tee /etc/hostname
	I0603 12:50:35.887820 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492
	
	I0603 12:50:35.887850 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:35.890703 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.891126 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:35.891155 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.891374 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:35.891586 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.891748 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.891917 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:35.892088 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:35.892307 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:50:35.892329 1102637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220492/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:50:36.002397 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
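
The /etc/hosts update that just ran is idempotent: the guest's hosts file is only rewritten when no line already ends in the machine hostname, either replacing an existing 127.0.1.1 entry or appending a new one. A minimal Go sketch of the same logic, operating on the file contents as a string (the sample input and hostname below are illustrative, not read from the VM):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell snippet above: if no line in /etc/hosts
    // already maps to the hostname, either rewrite an existing 127.0.1.1 entry
    // or append a fresh one.
    func ensureHostsEntry(hosts, hostname string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
            return hosts // already present, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-220492"))
    }

Run against a hosts file that only maps localhost, it appends the expected "127.0.1.1 ha-220492" line and leaves an already-correct file untouched.
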
	I0603 12:50:36.002451 1102637 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:50:36.002477 1102637 buildroot.go:174] setting up certificates
	I0603 12:50:36.002489 1102637 provision.go:84] configureAuth start
	I0603 12:50:36.002504 1102637 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:50:36.002791 1102637 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:50:36.005045 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.005489 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:36.005534 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.005720 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:36.007979 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.008371 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:36.008414 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.008549 1102637 provision.go:143] copyHostCerts
	I0603 12:50:36.008587 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:50:36.008641 1102637 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 12:50:36.008655 1102637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:50:36.008715 1102637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:50:36.008813 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:50:36.008834 1102637 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 12:50:36.008838 1102637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:50:36.008865 1102637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:50:36.008931 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:50:36.008947 1102637 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 12:50:36.008953 1102637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:50:36.008973 1102637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:50:36.009031 1102637 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.ha-220492 san=[127.0.0.1 192.168.39.6 ha-220492 localhost minikube]
	I0603 12:50:36.426579 1102637 provision.go:177] copyRemoteCerts
	I0603 12:50:36.426645 1102637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:50:36.426673 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:36.429050 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.429506 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:36.429535 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.429719 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:36.429931 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:36.430110 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:36.430266 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:50:36.516114 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 12:50:36.516187 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:50:36.542904 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 12:50:36.542968 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 12:50:36.567580 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 12:50:36.567634 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:50:36.595013 1102637 provision.go:87] duration metric: took 592.509171ms to configureAuth
	I0603 12:50:36.595036 1102637 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:50:36.595236 1102637 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:50:36.595332 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:36.597935 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.598319 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:36.598350 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.598531 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:36.598780 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:36.598945 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:36.599091 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:36.599233 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:36.599408 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:50:36.599423 1102637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:52:07.347681 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:52:07.347723 1102637 machine.go:97] duration metric: took 1m31.722013399s to provisionDockerMachine
	I0603 12:52:07.347740 1102637 start.go:293] postStartSetup for "ha-220492" (driver="kvm2")
	I0603 12:52:07.347754 1102637 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:52:07.347778 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.348150 1102637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:52:07.348197 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.351364 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.351779 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.351809 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.351971 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.352156 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.352294 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.352395 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:52:07.441051 1102637 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:52:07.445489 1102637 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:52:07.445521 1102637 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:52:07.445582 1102637 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:52:07.445653 1102637 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 12:52:07.445664 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 12:52:07.445740 1102637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:52:07.455307 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:52:07.479112 1102637 start.go:296] duration metric: took 131.353728ms for postStartSetup
	I0603 12:52:07.479169 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.479534 1102637 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0603 12:52:07.479563 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.482050 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.482452 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.482475 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.482629 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.482807 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.482961 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.483080 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	W0603 12:52:07.567189 1102637 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0603 12:52:07.567220 1102637 fix.go:56] duration metric: took 1m31.963229548s for fixHost
	I0603 12:52:07.567251 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.569872 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.570344 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.570374 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.570549 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.570753 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.570934 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.571062 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.571235 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:52:07.571406 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:52:07.571417 1102637 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:52:07.682126 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717419127.651620369
	
	I0603 12:52:07.682152 1102637 fix.go:216] guest clock: 1717419127.651620369
	I0603 12:52:07.682159 1102637 fix.go:229] Guest: 2024-06-03 12:52:07.651620369 +0000 UTC Remote: 2024-06-03 12:52:07.567236399 +0000 UTC m=+92.091400626 (delta=84.38397ms)
	I0603 12:52:07.682181 1102637 fix.go:200] guest clock delta is within tolerance: 84.38397ms
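
fixHost compares the guest clock (read over SSH with date +%s.%N) against the host-side sample and only resynchronizes when the delta exceeds a tolerance; here the 84.38397ms delta passes. A small Go sketch of that comparison, using the two timestamps from the log above (the 2-second tolerance is an assumed illustration value, not minikube's constant):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports whether the guest clock is close enough to the host's
    // reference time that no resync is needed.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        // Timestamps taken from the log: guest 12:52:07.651620369, host sample 12:52:07.567236399.
        guest := time.Date(2024, 6, 3, 12, 52, 7, 651620369, time.UTC)
        host := time.Date(2024, 6, 3, 12, 52, 7, 567236399, time.UTC)
        delta, ok := clockDeltaOK(guest, host, 2*time.Second) // assumed example tolerance
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
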
	I0603 12:52:07.682186 1102637 start.go:83] releasing machines lock for "ha-220492", held for 1m32.078213239s
	I0603 12:52:07.682210 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.682493 1102637 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:52:07.685004 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.685375 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.685419 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.685578 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.686077 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.686283 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.686376 1102637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:52:07.686443 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.686532 1102637 ssh_runner.go:195] Run: cat /version.json
	I0603 12:52:07.686550 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.689066 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.689266 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.689531 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.689564 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.689729 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.689869 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.689895 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.689903 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.690031 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.690088 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.690225 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.690289 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:52:07.690399 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.690562 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:52:07.796817 1102637 ssh_runner.go:195] Run: systemctl --version
	I0603 12:52:07.803106 1102637 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:52:07.966552 1102637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:52:07.973438 1102637 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:52:07.973515 1102637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:52:07.983548 1102637 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 12:52:07.983573 1102637 start.go:494] detecting cgroup driver to use...
	I0603 12:52:07.983645 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:52:08.002448 1102637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:52:08.017143 1102637 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:52:08.017196 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:52:08.033497 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:52:08.047156 1102637 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:52:08.208328 1102637 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:52:08.352367 1102637 docker.go:233] disabling docker service ...
	I0603 12:52:08.352446 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:52:08.368218 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:52:08.381892 1102637 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:52:08.529995 1102637 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:52:08.676854 1102637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:52:08.690303 1102637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:52:08.708663 1102637 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:52:08.708740 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.718943 1102637 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:52:08.719008 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.728952 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.739045 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.749070 1102637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:52:08.759914 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.770265 1102637 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.781587 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.791987 1102637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:52:08.801365 1102637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:52:08.810615 1102637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:52:08.949844 1102637 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:52:18.307826 1102637 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.357935721s)
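
The CRI-O reconfiguration above is a series of in-place sed rewrites of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, unprivileged-port sysctl) followed by a crio restart. The same key-rewrite pattern, expressed as string processing in Go (the function and sample config are illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // setTOMLKey rewrites a `key = value` line in a crio drop-in, the same effect
    // as the `sed -i 's|^.*pause_image = .*$|...|'` calls in the log above,
    // appending the key when no existing line matches.
    func setTOMLKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        line := fmt.Sprintf("%s = %q", key, value)
        if re.MatchString(conf) {
            return re.ReplaceAllString(conf, line)
        }
        return strings.TrimRight(conf, "\n") + "\n" + line + "\n"
    }

    func main() {
        conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
        conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
        conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }

Appending the key when it is absent keeps the rewrite idempotent across repeated starts, which matches the `grep -q ... || sed -i` guards visible in the log.
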
	I0603 12:52:18.307874 1102637 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:52:18.307938 1102637 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 12:52:18.313139 1102637 start.go:562] Will wait 60s for crictl version
	I0603 12:52:18.313206 1102637 ssh_runner.go:195] Run: which crictl
	I0603 12:52:18.317717 1102637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:52:18.369157 1102637 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:52:18.369249 1102637 ssh_runner.go:195] Run: crio --version
	I0603 12:52:18.403441 1102637 ssh_runner.go:195] Run: crio --version
	I0603 12:52:18.438271 1102637 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:52:18.439459 1102637 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:52:18.442111 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:18.442466 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:18.442492 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:18.442699 1102637 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:52:18.447687 1102637 kubeadm.go:877] updating cluster {Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:52:18.447857 1102637 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:52:18.447924 1102637 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:52:18.491510 1102637 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:52:18.491534 1102637 crio.go:433] Images already preloaded, skipping extraction
	I0603 12:52:18.491585 1102637 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:52:18.532159 1102637 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:52:18.532187 1102637 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:52:18.532201 1102637 kubeadm.go:928] updating node { 192.168.39.6 8443 v1.30.1 crio true true} ...
	I0603 12:52:18.532361 1102637 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
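
The kubelet drop-in shown above is rendered from per-node values (Kubernetes version, node name, node IP). A hedged sketch of rendering such a unit with text/template; the template text mirrors the drop-in in the log, but the struct and field names are assumptions rather than minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // A simplified stand-in for the drop-in written to
    // /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        err := t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.30.1", "ha-220492", "192.168.39.6"})
        if err != nil {
            panic(err)
        }
    }
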
	I0603 12:52:18.532447 1102637 ssh_runner.go:195] Run: crio config
	I0603 12:52:18.589584 1102637 cni.go:84] Creating CNI manager for ""
	I0603 12:52:18.589609 1102637 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 12:52:18.589619 1102637 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:52:18.589642 1102637 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-220492 NodeName:ha-220492 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:52:18.589855 1102637 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-220492"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
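
The generated kubeadm.yaml above is a single multi-document file carrying four objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A throwaway Go sketch that splits such a bundle and lists each document's kind, using plain string handling rather than a YAML parser (the sample bundle is abbreviated):

    package main

    import (
        "fmt"
        "strings"
    )

    // listKinds splits a multi-document YAML bundle on "---" separators and
    // reports the kind declared in each document.
    func listKinds(bundle string) []string {
        var kinds []string
        for _, doc := range strings.Split(bundle, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                trimmed := strings.TrimSpace(line)
                if strings.HasPrefix(trimmed, "kind:") {
                    kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
                    break
                }
            }
        }
        return kinds
    }

    func main() {
        bundle := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n" +
            "---\n" +
            "apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n" +
            "---\n" +
            "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n" +
            "---\n" +
            "apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
        fmt.Println(listKinds(bundle)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
    }
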
	I0603 12:52:18.589881 1102637 kube-vip.go:115] generating kube-vip config ...
	I0603 12:52:18.589923 1102637 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 12:52:18.601928 1102637 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 12:52:18.602121 1102637 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 12:52:18.602197 1102637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:52:18.611932 1102637 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:52:18.611992 1102637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 12:52:18.621252 1102637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0603 12:52:18.638068 1102637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:52:18.654346 1102637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0603 12:52:18.671483 1102637 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 12:52:18.689096 1102637 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 12:52:18.692932 1102637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:52:18.836964 1102637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:52:18.851718 1102637 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492 for IP: 192.168.39.6
	I0603 12:52:18.851750 1102637 certs.go:194] generating shared ca certs ...
	I0603 12:52:18.851775 1102637 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:52:18.851995 1102637 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:52:18.852049 1102637 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:52:18.852062 1102637 certs.go:256] generating profile certs ...
	I0603 12:52:18.852199 1102637 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key
	I0603 12:52:18.852235 1102637 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.ebd0a88a
	I0603 12:52:18.852254 1102637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.ebd0a88a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.106 192.168.39.169 192.168.39.254]
	I0603 12:52:19.038051 1102637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.ebd0a88a ...
	I0603 12:52:19.038085 1102637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.ebd0a88a: {Name:mk67a70e707ac0a534b1f8641bdf1100f902e28f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:52:19.038266 1102637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.ebd0a88a ...
	I0603 12:52:19.038278 1102637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.ebd0a88a: {Name:mkf25d39e1cf8c5ffb2c8ddbb157ca55f89f967b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:52:19.038347 1102637 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.ebd0a88a -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt
	I0603 12:52:19.038498 1102637 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.ebd0a88a -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key
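
The regenerated apiserver certificate is issued with a SAN list covering the cluster service IP, loopback, each control-plane node IP and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.6/.106/.169, 192.168.39.254). A compact crypto/x509 sketch of building a certificate template with those SANs; it self-signs for brevity, whereas minikube signs with its profile CA, and the DNS names are illustrative additions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // IP SANs taken from the log above: service IP, loopback, node IPs and the HA VIP.
        ips := []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.39.6"), net.ParseIP("192.168.39.106"),
            net.ParseIP("192.168.39.169"), net.ParseIP("192.168.39.254"),
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            DNSNames:     []string{"ha-220492", "localhost", "control-plane.minikube.internal"},
            IPAddresses:  ips,
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        fmt.Println(len(der) > 0, err)
    }
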
	I0603 12:52:19.038631 1102637 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key
	I0603 12:52:19.038646 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 12:52:19.038662 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 12:52:19.038675 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 12:52:19.038688 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 12:52:19.038700 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 12:52:19.038711 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 12:52:19.038725 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 12:52:19.038737 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 12:52:19.038794 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 12:52:19.038822 1102637 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 12:52:19.038832 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:52:19.038852 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:52:19.038874 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:52:19.038896 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 12:52:19.038932 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:52:19.038956 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:52:19.038970 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 12:52:19.038979 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 12:52:19.039551 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:52:19.065283 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:52:19.089051 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:52:19.114873 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:52:19.139510 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 12:52:19.162567 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:52:19.185849 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:52:19.209559 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:52:19.232743 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:52:19.256092 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 12:52:19.280001 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 12:52:19.303208 1102637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:52:19.320154 1102637 ssh_runner.go:195] Run: openssl version
	I0603 12:52:19.326131 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 12:52:19.336911 1102637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 12:52:19.341242 1102637 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 12:52:19.341301 1102637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 12:52:19.346946 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 12:52:19.356719 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 12:52:19.367870 1102637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 12:52:19.372463 1102637 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 12:52:19.372519 1102637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 12:52:19.378237 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:52:19.388171 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:52:19.399393 1102637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:52:19.403735 1102637 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:52:19.403782 1102637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:52:19.409342 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
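Each of the three certificate installs above follows the same pattern: link the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, then link it again as /etc/ssl/certs/<hash>.0, the name OpenSSL resolves during verification (minikubeCA.pem maps to b5213941.0 in this run). A minimal sketch of that convention, with the path taken from the log and everything else illustrative:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")                 # subject hash, b5213941 in this run
	sudo ln -fs "$CERT" "/etc/ssl/certs/$(basename "$CERT")"      # expose the cert under /etc/ssl/certs
	sudo ln -fs "/etc/ssl/certs/$(basename "$CERT")" "/etc/ssl/certs/${HASH}.0"   # hash-named link used for lookup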
	I0603 12:52:19.418843 1102637 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:52:19.423188 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:52:19.428716 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:52:19.434182 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:52:19.439688 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:52:19.445061 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:52:19.450518 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
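The six openssl x509 -checkend 86400 calls above ask whether each control-plane certificate will still be valid 24 hours from now; openssl exits non-zero when a cert falls inside that window, which presumably lets the bring-up path renew anything about to expire instead of reusing it. A minimal sketch of the same check (the loop and message are illustrative, not from the test):

	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt \
	           /var/lib/minikube/certs/front-proxy-client.crt; do
	    if ! sudo openssl x509 -noout -in "$crt" -checkend 86400; then
	        echo "expires within 24h, needs renewal: $crt"
	    fi
	done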
	I0603 12:52:19.455940 1102637 kubeadm.go:391] StartCluster: {Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
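The StartCluster config above describes the HA profile under test: three control-plane nodes (192.168.39.6, .106, .169) behind the virtual IP 192.168.39.254 on port 8443, one worker (m04), Kubernetes v1.30.1, and the crio runtime on the kvm2 driver. Purely as a hedged sketch (the harness assembles this config programmatically; the flags below assume a current minikube CLI and are not taken from the log), a similar cluster could be requested with something like:

	minikube start -p ha-220492 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.30.1 --ha      # multi-control-plane profile behind a VIP
	minikube node add -p ha-220492           # extra worker, corresponding to m04 above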
	I0603 12:52:19.456055 1102637 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:52:19.456092 1102637 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:52:19.497257 1102637 cri.go:89] found id: "8284f0e2d92cb3b5e720af6b59495e2dc6938032cc42de3a067fb58acf0d7e2b"
	I0603 12:52:19.497287 1102637 cri.go:89] found id: "bdd7def84a8632d39ffccfb334eaa29f667c14ace1db051c37cefe35b2acda2c"
	I0603 12:52:19.497291 1102637 cri.go:89] found id: "8ad33e865c65e667b1c3a3abe78d044c6e9acdaf073c00b7bace9095deb02715"
	I0603 12:52:19.497294 1102637 cri.go:89] found id: "e6ce4d724e4d0c279447ac2f5973f49a09a32638d6fbcbe53b93079a62ed667e"
	I0603 12:52:19.497296 1102637 cri.go:89] found id: "50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e"
	I0603 12:52:19.497305 1102637 cri.go:89] found id: "7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934"
	I0603 12:52:19.497308 1102637 cri.go:89] found id: "1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac"
	I0603 12:52:19.497310 1102637 cri.go:89] found id: "e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f"
	I0603 12:52:19.497313 1102637 cri.go:89] found id: "16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5"
	I0603 12:52:19.497319 1102637 cri.go:89] found id: "1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b"
	I0603 12:52:19.497321 1102637 cri.go:89] found id: "f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c"
	I0603 12:52:19.497324 1102637 cri.go:89] found id: "3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156"
	I0603 12:52:19.497326 1102637 cri.go:89] found id: "24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b"
	I0603 12:52:19.497328 1102637 cri.go:89] found id: "86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35"
	I0603 12:52:19.497333 1102637 cri.go:89] found id: ""
	I0603 12:52:19.497376 1102637 ssh_runner.go:195] Run: sudo runc list -f json
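The container discovery above boils down to one crictl query scoped by the pod-namespace label; --quiet prints one container ID per line, which is exactly the list of "found id:" entries logged. An illustrative way to reproduce it on the node (IDs will of course differ between runs):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json     # the follow-up call in the log: low-level runc container state as JSON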
	
	
	==> CRI-O <==
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.802586541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717419296802562872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25afe2c3-8346-4413-9340-932ea89f42d2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.803343356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15a6cdae-fa2a-4b6d-a546-79372a7bd860 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.803402122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15a6cdae-fa2a-4b6d-a546-79372a7bd860 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.803871982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bdbad040139c6bcb4ecf79851d910f1956ae7d70ddb13a89f98e6f364d182cc,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717419225277983339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717419203265462929,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717419191260651439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:814ee2909ea750a0b5ad76b4b2083b3c8847b7a38ab14da4d756db2e147a7bf5,PodSandboxId:6d8da102188476dd055ffac241996f145d4a29e4e78cb13e080e8a4dee1d8ad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717419178550858553,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717419176588367856,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717419175255850915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdf7fec087647f8578e25827a877d7e975efb0c1d754cbd1eeee97e5ce09fa80,PodSandboxId:92b8d227c6d1ed130f22356a680e6f7b9c919e83912a9ae9508debd5183f2caf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717419157557932569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61fd699b695dc1beae4468f8fa8e57e9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359,PodSandboxId:498fb53617c69c314b9a3f9014055e57574506035ab66bb13a4f46960f0c2223,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717419146600723623,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dffbb
4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717419145401912650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c80e06763a2b7bb46c689ad6c8fef0f893f1765af29
1317b666a68ab2bbbc8ec,PodSandboxId:e2131cfde7d9492dab997655ed6ea3d6bc4fe67b5ba4becca7259f73e3c5aa5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145422308062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618,PodSandboxId:7b970b2197689db12520ccfc228623fa21aff90847dab91f9a332f6ea866e828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145375975426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717419145231319917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab
680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883,PodSandboxId:02cc9c7643ae1d60c77512d0aeddf63e8f21db58a64f13cd32f2de0e6f5846de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717419145208721134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc55611
3859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457,PodSandboxId:7f89ba975704b62f04831d12706488d1e68f340ca348752766da5bcf3979e5cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717419145145376217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes
.container.hash: d01e1cae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717419145058310427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kuber
netes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717418649829835451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kuberne
tes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831123170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831526984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717418501295990343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1717418481055378274,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1717418480896883638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15a6cdae-fa2a-4b6d-a546-79372a7bd860 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.845998593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d2b758a-282f-4bcd-adc1-234c9312985e name=/runtime.v1.RuntimeService/Version
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.846119867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d2b758a-282f-4bcd-adc1-234c9312985e name=/runtime.v1.RuntimeService/Version
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.847294548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15a8ad3d-a01f-4d7a-9333-9a6463c0084a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.847728708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717419296847707719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15a8ad3d-a01f-4d7a-9333-9a6463c0084a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.848324644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ba04ce3-d9d9-4c0b-9bff-8487de20181a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.848380006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ba04ce3-d9d9-4c0b-9bff-8487de20181a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.848822695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bdbad040139c6bcb4ecf79851d910f1956ae7d70ddb13a89f98e6f364d182cc,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717419225277983339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717419203265462929,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717419191260651439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:814ee2909ea750a0b5ad76b4b2083b3c8847b7a38ab14da4d756db2e147a7bf5,PodSandboxId:6d8da102188476dd055ffac241996f145d4a29e4e78cb13e080e8a4dee1d8ad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717419178550858553,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717419176588367856,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717419175255850915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdf7fec087647f8578e25827a877d7e975efb0c1d754cbd1eeee97e5ce09fa80,PodSandboxId:92b8d227c6d1ed130f22356a680e6f7b9c919e83912a9ae9508debd5183f2caf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717419157557932569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61fd699b695dc1beae4468f8fa8e57e9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359,PodSandboxId:498fb53617c69c314b9a3f9014055e57574506035ab66bb13a4f46960f0c2223,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717419146600723623,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dffbb
4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717419145401912650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c80e06763a2b7bb46c689ad6c8fef0f893f1765af29
1317b666a68ab2bbbc8ec,PodSandboxId:e2131cfde7d9492dab997655ed6ea3d6bc4fe67b5ba4becca7259f73e3c5aa5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145422308062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618,PodSandboxId:7b970b2197689db12520ccfc228623fa21aff90847dab91f9a332f6ea866e828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145375975426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717419145231319917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab
680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883,PodSandboxId:02cc9c7643ae1d60c77512d0aeddf63e8f21db58a64f13cd32f2de0e6f5846de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717419145208721134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc55611
3859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457,PodSandboxId:7f89ba975704b62f04831d12706488d1e68f340ca348752766da5bcf3979e5cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717419145145376217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes
.container.hash: d01e1cae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717419145058310427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kuber
netes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717418649829835451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kuberne
tes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831123170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831526984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717418501295990343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1717418481055378274,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1717418480896883638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ba04ce3-d9d9-4c0b-9bff-8487de20181a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.893800310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e330cbe9-bad6-48da-8c8a-43b434d8a6c5 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.893880091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e330cbe9-bad6-48da-8c8a-43b434d8a6c5 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.895290948Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c676683a-c7b3-4651-b985-ea453ff7c0c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.896127666Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717419296896091394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c676683a-c7b3-4651-b985-ea453ff7c0c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.896712562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=389bfb6e-16ca-4968-a3e7-337cb6299fdb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.896788315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=389bfb6e-16ca-4968-a3e7-337cb6299fdb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.897265456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bdbad040139c6bcb4ecf79851d910f1956ae7d70ddb13a89f98e6f364d182cc,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717419225277983339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717419203265462929,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717419191260651439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:814ee2909ea750a0b5ad76b4b2083b3c8847b7a38ab14da4d756db2e147a7bf5,PodSandboxId:6d8da102188476dd055ffac241996f145d4a29e4e78cb13e080e8a4dee1d8ad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717419178550858553,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717419176588367856,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717419175255850915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdf7fec087647f8578e25827a877d7e975efb0c1d754cbd1eeee97e5ce09fa80,PodSandboxId:92b8d227c6d1ed130f22356a680e6f7b9c919e83912a9ae9508debd5183f2caf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717419157557932569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61fd699b695dc1beae4468f8fa8e57e9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359,PodSandboxId:498fb53617c69c314b9a3f9014055e57574506035ab66bb13a4f46960f0c2223,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717419146600723623,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dffbb
4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717419145401912650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c80e06763a2b7bb46c689ad6c8fef0f893f1765af29
1317b666a68ab2bbbc8ec,PodSandboxId:e2131cfde7d9492dab997655ed6ea3d6bc4fe67b5ba4becca7259f73e3c5aa5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145422308062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618,PodSandboxId:7b970b2197689db12520ccfc228623fa21aff90847dab91f9a332f6ea866e828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145375975426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717419145231319917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab
680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883,PodSandboxId:02cc9c7643ae1d60c77512d0aeddf63e8f21db58a64f13cd32f2de0e6f5846de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717419145208721134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc55611
3859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457,PodSandboxId:7f89ba975704b62f04831d12706488d1e68f340ca348752766da5bcf3979e5cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717419145145376217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes
.container.hash: d01e1cae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717419145058310427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kuber
netes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717418649829835451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kuberne
tes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831123170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831526984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717418501295990343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1717418481055378274,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1717418480896883638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=389bfb6e-16ca-4968-a3e7-337cb6299fdb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.939475917Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f4d5d1c-75c0-47a1-8104-9b265639392a name=/runtime.v1.RuntimeService/Version
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.939550420Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f4d5d1c-75c0-47a1-8104-9b265639392a name=/runtime.v1.RuntimeService/Version
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.940852075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edfcbe58-d060-4ef0-a7b4-c50bba002b69 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.941456785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717419296941435139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edfcbe58-d060-4ef0-a7b4-c50bba002b69 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.942581560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fda36e77-a757-4c08-b71a-5ab7702c9e10 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.942663439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fda36e77-a757-4c08-b71a-5ab7702c9e10 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:54:56 ha-220492 crio[3827]: time="2024-06-03 12:54:56.943546411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bdbad040139c6bcb4ecf79851d910f1956ae7d70ddb13a89f98e6f364d182cc,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717419225277983339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717419203265462929,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717419191260651439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:814ee2909ea750a0b5ad76b4b2083b3c8847b7a38ab14da4d756db2e147a7bf5,PodSandboxId:6d8da102188476dd055ffac241996f145d4a29e4e78cb13e080e8a4dee1d8ad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717419178550858553,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717419176588367856,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717419175255850915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdf7fec087647f8578e25827a877d7e975efb0c1d754cbd1eeee97e5ce09fa80,PodSandboxId:92b8d227c6d1ed130f22356a680e6f7b9c919e83912a9ae9508debd5183f2caf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717419157557932569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61fd699b695dc1beae4468f8fa8e57e9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359,PodSandboxId:498fb53617c69c314b9a3f9014055e57574506035ab66bb13a4f46960f0c2223,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717419146600723623,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dffbb
4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717419145401912650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c80e06763a2b7bb46c689ad6c8fef0f893f1765af29
1317b666a68ab2bbbc8ec,PodSandboxId:e2131cfde7d9492dab997655ed6ea3d6bc4fe67b5ba4becca7259f73e3c5aa5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145422308062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618,PodSandboxId:7b970b2197689db12520ccfc228623fa21aff90847dab91f9a332f6ea866e828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145375975426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717419145231319917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab
680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883,PodSandboxId:02cc9c7643ae1d60c77512d0aeddf63e8f21db58a64f13cd32f2de0e6f5846de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717419145208721134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc55611
3859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457,PodSandboxId:7f89ba975704b62f04831d12706488d1e68f340ca348752766da5bcf3979e5cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717419145145376217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes
.container.hash: d01e1cae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717419145058310427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kuber
netes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717418649829835451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kuberne
tes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831123170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831526984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717418501295990343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1717418481055378274,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1717418480896883638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fda36e77-a757-4c08-b71a-5ab7702c9e10 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3bdbad040139c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   6fc8d1bbe0137       storage-provisioner
	ed5b6aa1d959c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               3                   486909c094af4       kindnet-hbl6v
	8b5b47bf0b5f6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   2                   453f86c770842       kube-controller-manager-ha-220492
	814ee2909ea75       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   6d8da10218847       busybox-fc5497c4f-5z6j2
	f3d2b258b246f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Running             kube-apiserver            3                   878e194eb4c5d       kube-apiserver-ha-220492
	7f1ebe7c407f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   6fc8d1bbe0137       storage-provisioner
	fdf7fec087647       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   92b8d227c6d1e       kube-vip-ha-220492
	7c3064afc1c4a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   498fb53617c69       kube-proxy-w2hpg
	c80e06763a2b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e2131cfde7d94       coredns-7db6d8ff4d-q7687
	6f1dffbb4b704       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               2                   486909c094af4       kindnet-hbl6v
	f2dd659fd5934       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   7b970b2197689       coredns-7db6d8ff4d-d2tgp
	4e5273b3a26c8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   878e194eb4c5d       kube-apiserver-ha-220492
	40a382da798af       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   02cc9c7643ae1       kube-scheduler-ha-220492
	2fcebee1743ba       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   7f89ba975704b       etcd-ha-220492
	07ce13ba943e5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   453f86c770842       kube-controller-manager-ha-220492
	76c9e115804f2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   c73634cd0ed83       busybox-fc5497c4f-5z6j2
	50f524d71cd1f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   6d9c5f1a45b9e       coredns-7db6d8ff4d-d2tgp
	7c67da4b30c5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   1b5bd65416e85       coredns-7db6d8ff4d-q7687
	16c93dcdad420       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago       Exited              kube-proxy                0                   4d41713a63ac5       kube-proxy-w2hpg
	3f1c2bb32752f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   ba8b6aec50011       etcd-ha-220492
	86f8a60e53334       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago       Exited              kube-scheduler            0                   b96e7f287499d       kube-scheduler-ha-220492
	
	
	==> coredns [50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e] <==
	[INFO] 10.244.1.2:53322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114547s
	[INFO] 10.244.2.2:47547 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145066s
	[INFO] 10.244.2.2:43785 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094815s
	[INFO] 10.244.2.2:54501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319495s
	[INFO] 10.244.2.2:55983 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086973s
	[INFO] 10.244.2.2:56195 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069334s
	[INFO] 10.244.0.4:42110 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064533s
	[INFO] 10.244.0.4:48697 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058629s
	[INFO] 10.244.1.2:42865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168668s
	[INFO] 10.244.1.2:56794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111494s
	[INFO] 10.244.1.2:58581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084125s
	[INFO] 10.244.1.2:50954 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099179s
	[INFO] 10.244.2.2:42915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142235s
	[INFO] 10.244.2.2:49410 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102812s
	[INFO] 10.244.0.4:51178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00019093s
	[INFO] 10.244.1.2:40502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168017s
	[INFO] 10.244.1.2:35921 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180824s
	[INFO] 10.244.1.2:40369 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155572s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1812&timeout=5m43s&timeoutSeconds=343&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934] <==
	[INFO] 10.244.1.2:36984 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001651953s
	[INFO] 10.244.1.2:59707 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013257s
	[INFO] 10.244.1.2:43132 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294041s
	[INFO] 10.244.1.2:50044 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148444s
	[INFO] 10.244.1.2:46108 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000262338s
	[INFO] 10.244.2.2:59857 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001619455s
	[INFO] 10.244.2.2:37703 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098955s
	[INFO] 10.244.2.2:51044 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180769s
	[INFO] 10.244.0.4:56245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077571s
	[INFO] 10.244.0.4:40429 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005283s
	[INFO] 10.244.2.2:55900 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100265s
	[INFO] 10.244.2.2:57003 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000955s
	[INFO] 10.244.0.4:39653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107486s
	[INFO] 10.244.0.4:50505 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152153s
	[INFO] 10.244.0.4:40598 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156098s
	[INFO] 10.244.1.2:37651 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154868s
	[INFO] 10.244.2.2:47903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111761s
	[INFO] 10.244.2.2:55067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000076585s
	[INFO] 10.244.2.2:39348 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123715s
	[INFO] 10.244.2.2:33705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109704s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c80e06763a2b7bb46c689ad6c8fef0f893f1765af291317b666a68ab2bbbc8ec] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[390088725]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 12:52:30.478) (total time: 10001ms):
	Trace[390088725]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:52:40.479)
	Trace[390088725]: [10.001230462s] [10.001230462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43708->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43708->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35810->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35810->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35796->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35796->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618] <==
	[INFO] plugin/kubernetes: Trace[747306496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 12:52:30.135) (total time: 10001ms):
	Trace[747306496]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:52:40.136)
	Trace[747306496]: [10.001195314s] [10.001195314s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55372->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55372->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50886->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50886->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50876->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50876->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
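	
	The reflector errors in the four CoreDNS logs above all involve the kubernetes Service VIP (10.96.0.1:443) becoming unreachable or rejecting credentials while the control plane restarts, and they stop once an apiserver is reachable again. A minimal sketch for probing that path from inside the cluster after the restart; the context name ha-220492, the throwaway pod name dns-probe, and the busybox:1.28 image are assumptions chosen for illustration:
	
	  # resolve the in-cluster API Service through CoreDNS from a short-lived pod
	  kubectl --context ha-220492 run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local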
	
	
	==> describe nodes <==
	Name:               ha-220492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_41_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:41:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:54:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:53:06 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:53:06 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:53:06 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:53:06 +0000   Mon, 03 Jun 2024 12:41:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-220492
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bebf6ef8229e4a0498f737d165a96550
	  System UUID:                bebf6ef8-229e-4a04-98f7-37d165a96550
	  Boot ID:                    38c7d220-f8e0-4890-a7e1-09c3bc826d0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5z6j2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-d2tgp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-q7687             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-220492                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hbl6v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-220492             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-220492    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-w2hpg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-220492             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-220492                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 108s   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-220492 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-220492 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-220492 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-220492 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Warning  ContainerGCFailed        3m30s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           101s   node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal   RegisteredNode           94s    node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal   RegisteredNode           30s    node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	
	
	Name:               ha-220492-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_42_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:42:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:54:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:53:50 +0000   Mon, 03 Jun 2024 12:53:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:53:50 +0000   Mon, 03 Jun 2024 12:53:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:53:50 +0000   Mon, 03 Jun 2024 12:53:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:53:50 +0000   Mon, 03 Jun 2024 12:53:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    ha-220492-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1658a5c6e8394d57a265332808e714ab
	  System UUID:                1658a5c6-e839-4d57-a265-332808e714ab
	  Boot ID:                    4a625f5b-1388-411d-8640-464976133bbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m229v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-220492-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-5p8f7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-220492-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-220492-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dkzgt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-220492-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-220492-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 107s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                    node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-220492-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-220492-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-220492-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  NodeNotReady             9m9s                   node-controller  Node ha-220492-m02 status is now: NodeNotReady
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node ha-220492-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node ha-220492-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s (x7 over 2m15s)  kubelet          Node ha-220492-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           101s                   node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  RegisteredNode           94s                    node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  RegisteredNode           30s                    node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	
	
	Name:               ha-220492-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_43_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:43:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:54:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:54:29 +0000   Mon, 03 Jun 2024 12:53:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:54:29 +0000   Mon, 03 Jun 2024 12:53:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:54:29 +0000   Mon, 03 Jun 2024 12:53:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:54:29 +0000   Mon, 03 Jun 2024 12:53:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-220492-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1055ed032f443e996570a5a0e130a0f
	  System UUID:                c1055ed0-32f4-43e9-9657-0a5a0e130a0f
	  Boot ID:                    6e2f76dc-b256-4ad4-ba50-e2f379a9cd11
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-stmtj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-220492-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-gkd6p                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-220492-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-220492-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-m5l8r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-220492-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-220492-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 42s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-220492-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-220492-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-220492-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	  Normal   NodeNotReady             60s                node-controller  Node ha-220492-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s (x2 over 59s)  kubelet          Node ha-220492-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x2 over 59s)  kubelet          Node ha-220492-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x2 over 59s)  kubelet          Node ha-220492-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s                kubelet          Node ha-220492-m03 has been rebooted, boot id: 6e2f76dc-b256-4ad4-ba50-e2f379a9cd11
	  Normal   NodeReady                59s                kubelet          Node ha-220492-m03 status is now: NodeReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-220492-m03 event: Registered Node ha-220492-m03 in Controller
	
	
	Name:               ha-220492-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_44_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:44:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:54:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:54:49 +0000   Mon, 03 Jun 2024 12:54:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:54:49 +0000   Mon, 03 Jun 2024 12:54:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:54:49 +0000   Mon, 03 Jun 2024 12:54:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:54:49 +0000   Mon, 03 Jun 2024 12:54:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-220492-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6e57d6c6ec64017a56a85c3aa55fe71
	  System UUID:                c6e57d6c-6ec6-4017-a56a-85c3aa55fe71
	  Boot ID:                    68c487f3-a668-48f5-a7eb-6eae7251e1af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-l7rsb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-ggdgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-220492-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-220492-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-220492-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-220492-m04 status is now: NodeReady
	  Normal   RegisteredNode           101s               node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   NodeNotReady             60s                node-controller  Node ha-220492-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-220492-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-220492-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-220492-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-220492-m04 has been rebooted, boot id: 68c487f3-a668-48f5-a7eb-6eae7251e1af
	  Normal   NodeReady                8s                 kubelet          Node ha-220492-m04 status is now: NodeReady
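	
	The node descriptions above are the standard kubectl view of the HA cluster shortly after the restart, with all four nodes reporting Ready. A minimal sketch of commands that produce equivalent output, assuming the kubectl context for this profile is named ha-220492:
	
	  # summary first, then the full per-node description embedded in this log
	  kubectl --context ha-220492 get nodes -o wide
	  kubectl --context ha-220492 describe nodes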
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 12:41] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.058474] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056951] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.164438] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.150241] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.258150] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.221845] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.557727] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.059101] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.202766] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.082984] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.083493] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.330072] kauditd_printk_skb: 68 callbacks suppressed
	[Jun 3 12:52] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.148118] systemd-fstab-generator[3755]: Ignoring "noauto" option for root device
	[  +0.173822] systemd-fstab-generator[3769]: Ignoring "noauto" option for root device
	[  +0.153196] systemd-fstab-generator[3782]: Ignoring "noauto" option for root device
	[  +0.274299] systemd-fstab-generator[3811]: Ignoring "noauto" option for root device
	[  +9.877419] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +0.089964] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.851315] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.277749] kauditd_printk_skb: 98 callbacks suppressed
	[Jun 3 12:53] kauditd_printk_skb: 12 callbacks suppressed
	[ +26.750467] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457] <==
	{"level":"warn","ts":"2024-06-03T12:53:56.337447Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1c2a56b0ad40f85f","rtt":"0s","error":"dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:53:56.337498Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1c2a56b0ad40f85f","rtt":"0s","error":"dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:53:57.044297Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.169:2380/version","remote-member-id":"1c2a56b0ad40f85f","error":"Get \"https://192.168.39.169:2380/version\": dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:53:57.044598Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1c2a56b0ad40f85f","error":"Get \"https://192.168.39.169:2380/version\": dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:54:01.046798Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.169:2380/version","remote-member-id":"1c2a56b0ad40f85f","error":"Get \"https://192.168.39.169:2380/version\": dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:54:01.046951Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1c2a56b0ad40f85f","error":"Get \"https://192.168.39.169:2380/version\": dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:54:01.338315Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1c2a56b0ad40f85f","rtt":"0s","error":"dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:54:01.338346Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1c2a56b0ad40f85f","rtt":"0s","error":"dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:54:05.049486Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.169:2380/version","remote-member-id":"1c2a56b0ad40f85f","error":"Get \"https://192.168.39.169:2380/version\": dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:54:05.049577Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1c2a56b0ad40f85f","error":"Get \"https://192.168.39.169:2380/version\": dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:54:06.339165Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1c2a56b0ad40f85f","rtt":"0s","error":"dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:54:06.339279Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1c2a56b0ad40f85f","rtt":"0s","error":"dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-03T12:54:08.724156Z","caller":"traceutil/trace.go:171","msg":"trace[1327155346] transaction","detail":"{read_only:false; response_revision:2315; number_of_response:1; }","duration":"143.323742ms","start":"2024-06-03T12:54:08.580788Z","end":"2024-06-03T12:54:08.724112Z","steps":["trace[1327155346] 'process raft request'  (duration: 143.144807ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:54:08.727928Z","caller":"traceutil/trace.go:171","msg":"trace[1191624628] transaction","detail":"{read_only:false; response_revision:2316; number_of_response:1; }","duration":"113.817353ms","start":"2024-06-03T12:54:08.614098Z","end":"2024-06-03T12:54:08.727916Z","steps":["trace[1191624628] 'process raft request'  (duration: 113.745911ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:54:09.05193Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.169:2380/version","remote-member-id":"1c2a56b0ad40f85f","error":"Get \"https://192.168.39.169:2380/version\": dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-03T12:54:09.052082Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1c2a56b0ad40f85f","error":"Get \"https://192.168.39.169:2380/version\": dial tcp 192.168.39.169:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-03T12:54:09.22911Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:54:09.229214Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:54:09.229439Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:54:09.263088Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6f26d2d338759d80","to":"1c2a56b0ad40f85f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-03T12:54:09.263194Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:54:09.272781Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6f26d2d338759d80","to":"1c2a56b0ad40f85f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-03T12:54:09.272837Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"warn","ts":"2024-06-03T12:54:52.431894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.849761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-03T12:54:52.432979Z","caller":"traceutil/trace.go:171","msg":"trace[1052840892] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:2481; }","duration":"104.946524ms","start":"2024-06-03T12:54:52.327925Z","end":"2024-06-03T12:54:52.432872Z","steps":["trace[1052840892] 'count revisions from in-memory index tree'  (duration: 102.493341ms)"],"step_count":1}
	
	
	==> etcd [3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156] <==
	2024/06/03 12:50:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T12:50:36.759439Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"543.117856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-06-03T12:50:36.759469Z","caller":"traceutil/trace.go:171","msg":"trace[395994199] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"543.169823ms","start":"2024-06-03T12:50:36.216294Z","end":"2024-06-03T12:50:36.759464Z","steps":["trace[395994199] 'agreement among raft nodes before linearized reading'  (duration: 543.139448ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:50:36.7595Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:50:36.21628Z","time spent":"543.215521ms","remote":"127.0.0.1:34736","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	2024/06/03 12:50:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T12:50:36.806566Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T12:50:36.8066Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T12:50:36.806648Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6f26d2d338759d80","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-03T12:50:36.806808Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.806856Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.806903Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.80697Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.807109Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.807208Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.807241Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.807264Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807291Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807344Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807452Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807517Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807563Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807606Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.811442Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-06-03T12:50:36.811619Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-06-03T12:50:36.811686Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-220492","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	
	
	==> kernel <==
	 12:54:57 up 14 min,  0 users,  load average: 0.56, 0.49, 0.30
	Linux ha-220492 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6f1dffbb4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a] <==
	I0603 12:52:25.898561       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 12:52:25.898647       1 main.go:107] hostIP = 192.168.39.6
	podIP = 192.168.39.6
	I0603 12:52:25.898813       1 main.go:116] setting mtu 1500 for CNI 
	I0603 12:52:25.898858       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 12:52:25.898881       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 12:52:36.160589       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0603 12:52:38.289487       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 12:52:47.505418       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.41:59974->10.96.0.1:443: read: connection reset by peer
	I0603 12:52:50.577500       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 12:52:53.649834       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89] <==
	I0603 12:54:24.098631       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:54:34.111896       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:54:34.112054       1 main.go:227] handling current node
	I0603 12:54:34.112135       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:54:34.112164       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:54:34.112275       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0603 12:54:34.112294       1 main.go:250] Node ha-220492-m03 has CIDR [10.244.2.0/24] 
	I0603 12:54:34.112372       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:54:34.112398       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:54:44.128199       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:54:44.128355       1 main.go:227] handling current node
	I0603 12:54:44.128397       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:54:44.128424       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:54:44.128611       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0603 12:54:44.128845       1 main.go:250] Node ha-220492-m03 has CIDR [10.244.2.0/24] 
	I0603 12:54:44.129077       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:54:44.129164       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:54:54.136807       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:54:54.136912       1 main.go:227] handling current node
	I0603 12:54:54.136949       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:54:54.136977       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:54:54.137212       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0603 12:54:54.137266       1 main.go:250] Node ha-220492-m03 has CIDR [10.244.2.0/24] 
	I0603 12:54:54.137430       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:54:54.137481       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd] <==
	I0603 12:52:25.951283       1 options.go:221] external host was not specified, using 192.168.39.6
	I0603 12:52:25.965686       1 server.go:148] Version: v1.30.1
	I0603 12:52:25.965753       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:52:26.415884       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 12:52:26.429198       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 12:52:26.429237       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 12:52:26.429456       1 instance.go:299] Using reconciler: lease
	I0603 12:52:26.441180       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0603 12:52:46.412698       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0603 12:52:46.413940       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0603 12:52:46.430746       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be] <==
	I0603 12:53:03.794897       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 12:53:03.794971       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 12:53:03.890224       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 12:53:03.895470       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 12:53:03.901812       1 aggregator.go:165] initial CRD sync complete...
	I0603 12:53:03.901928       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 12:53:03.902042       1 cache.go:32] Waiting for caches to sync for autoregister controller
	W0603 12:53:03.943326       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.169]
	I0603 12:53:03.957312       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 12:53:03.957351       1 policy_source.go:224] refreshing policies
	I0603 12:53:03.957484       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 12:53:03.988532       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 12:53:03.988554       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 12:53:03.988709       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 12:53:03.989169       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 12:53:03.989966       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 12:53:03.994394       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 12:53:04.003956       1 cache.go:39] Caches are synced for autoregister controller
	I0603 12:53:04.018339       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 12:53:04.045360       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 12:53:04.057435       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0603 12:53:04.065487       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0603 12:53:04.793710       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0603 12:53:05.085572       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.169 192.168.39.6]
	W0603 12:53:25.087312       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.106 192.168.39.6]
	
	
	==> kube-controller-manager [07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5] <==
	I0603 12:52:26.666855       1 serving.go:380] Generated self-signed cert in-memory
	I0603 12:52:27.301798       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 12:52:27.301836       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:52:27.303561       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 12:52:27.303662       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 12:52:27.304162       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 12:52:27.304238       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0603 12:52:47.436904       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.6:8443/healthz\": dial tcp 192.168.39.6:8443: connect: connection refused"
	
	
	==> kube-controller-manager [8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e] <==
	I0603 12:53:23.445243       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0603 12:53:23.445352       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220492"
	I0603 12:53:23.445402       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220492-m02"
	I0603 12:53:23.445420       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220492-m03"
	I0603 12:53:23.445434       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-220492-m04"
	I0603 12:53:23.445780       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0603 12:53:23.448469       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 12:53:23.449642       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 12:53:23.499556       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 12:53:23.867891       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 12:53:23.867934       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 12:53:23.893335       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 12:53:35.594695       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rh5cx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rh5cx\": the object has been modified; please apply your changes to the latest version and try again"
	I0603 12:53:35.595783       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"b4b6a883-5e55-496d-ba1d-c429f359ce96", APIVersion:"v1", ResourceVersion:"237", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rh5cx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rh5cx": the object has been modified; please apply your changes to the latest version and try again
	I0603 12:53:35.642598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.731003ms"
	E0603 12:53:35.642645       1 replica_set.go:557] sync "kube-system/coredns-7db6d8ff4d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-7db6d8ff4d": the object has been modified; please apply your changes to the latest version and try again
	I0603 12:53:35.642744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.156µs"
	I0603 12:53:35.647840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.887µs"
	I0603 12:53:57.262973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220492-m04"
	I0603 12:53:57.499591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.850189ms"
	I0603 12:53:57.500549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="135.251µs"
	I0603 12:53:59.575868       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.012µs"
	I0603 12:54:16.781589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.624747ms"
	I0603 12:54:16.781718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.343µs"
	I0603 12:54:49.140677       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220492-m04"
	
	
	==> kube-proxy [16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5] <==
	E0603 12:49:23.923124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:26.995959       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:26.996054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:26.996147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:26.996175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:26.996321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:26.996446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:33.137981       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:33.138419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:33.138690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:33.138749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:33.138703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:33.138822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:42.353881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:42.354072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:45.426264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:45.427368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:45.428082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:45.428286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:50:00.786224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:50:00.786818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:50:00.786942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:50:00.786944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:50:06.931564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:50:06.931798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359] <==
	I0603 12:52:27.366129       1 server_linux.go:69] "Using iptables proxy"
	E0603 12:52:28.241889       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 12:52:31.313405       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 12:52:34.386558       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 12:52:40.530512       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 12:52:49.745597       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0603 12:53:08.621848       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	I0603 12:53:08.668531       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:53:08.668614       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:53:08.668645       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:53:08.671211       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:53:08.671584       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:53:08.671832       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:53:08.673086       1 config.go:192] "Starting service config controller"
	I0603 12:53:08.673126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:53:08.673154       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:53:08.673157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:53:08.674942       1 config.go:319] "Starting node config controller"
	I0603 12:53:08.674970       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:53:08.773528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:53:08.773598       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:53:08.775097       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883] <==
	W0603 12:52:56.187447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.6:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0603 12:52:56.187520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.6:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	W0603 12:52:56.499660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.6:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0603 12:52:56.499731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.6:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	W0603 12:52:56.529603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0603 12:52:56.529649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	W0603 12:52:56.674386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.6:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0603 12:52:56.674422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.6:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	W0603 12:53:03.837578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:53:03.837647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:53:03.837741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:53:03.837782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:53:03.837847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:53:03.837886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:53:03.837960       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:53:03.841133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:53:03.841198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:53:03.841301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:53:03.841336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:53:03.841415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:53:03.841506       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:53:03.842061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:53:03.898479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:53:03.898703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 12:53:04.442741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35] <==
	W0603 12:50:32.950834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:50:32.950940       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 12:50:32.961091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:50:32.961173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 12:50:32.989986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:50:32.990107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:50:33.240103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 12:50:33.240160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 12:50:33.521542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:50:33.521594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:50:33.630980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:50:33.631296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:50:34.209964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:50:34.210127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:50:34.441540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:50:34.441607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:50:34.859116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:50:34.859206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:50:34.899382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:50:34.899642       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:50:35.060977       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:50:35.061134       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 12:50:35.416346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:50:35.416459       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:50:36.726806       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 12:53:05 ha-220492 kubelet[1372]: I0603 12:53:05.105527    1372 status_manager.go:853] "Failed to get status for pod" podUID="4620ab680afec05b26612f993071a866" pod="kube-system/kube-apiserver-ha-220492" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jun 03 12:53:07 ha-220492 kubelet[1372]: I0603 12:53:07.261785    1372 scope.go:117] "RemoveContainer" containerID="7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d"
	Jun 03 12:53:07 ha-220492 kubelet[1372]: E0603 12:53:07.261959    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f85b2808-26fa-4608-a208-2c11eaddc293)\"" pod="kube-system/storage-provisioner" podUID="f85b2808-26fa-4608-a208-2c11eaddc293"
	Jun 03 12:53:08 ha-220492 kubelet[1372]: I0603 12:53:08.246460    1372 scope.go:117] "RemoveContainer" containerID="6f1dffbb4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a"
	Jun 03 12:53:08 ha-220492 kubelet[1372]: E0603 12:53:08.246761    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-hbl6v_kube-system(9f697f13-4a60-4247-bb5e-a8bcdd3336cd)\"" pod="kube-system/kindnet-hbl6v" podUID="9f697f13-4a60-4247-bb5e-a8bcdd3336cd"
	Jun 03 12:53:11 ha-220492 kubelet[1372]: I0603 12:53:11.247422    1372 scope.go:117] "RemoveContainer" containerID="07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5"
	Jun 03 12:53:18 ha-220492 kubelet[1372]: I0603 12:53:18.246978    1372 scope.go:117] "RemoveContainer" containerID="7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d"
	Jun 03 12:53:18 ha-220492 kubelet[1372]: E0603 12:53:18.247263    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f85b2808-26fa-4608-a208-2c11eaddc293)\"" pod="kube-system/storage-provisioner" podUID="f85b2808-26fa-4608-a208-2c11eaddc293"
	Jun 03 12:53:23 ha-220492 kubelet[1372]: I0603 12:53:23.247443    1372 scope.go:117] "RemoveContainer" containerID="6f1dffbb4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a"
	Jun 03 12:53:27 ha-220492 kubelet[1372]: E0603 12:53:27.303109    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:53:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:53:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:53:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:53:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:53:31 ha-220492 kubelet[1372]: I0603 12:53:31.246930    1372 scope.go:117] "RemoveContainer" containerID="7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d"
	Jun 03 12:53:31 ha-220492 kubelet[1372]: E0603 12:53:31.247187    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f85b2808-26fa-4608-a208-2c11eaddc293)\"" pod="kube-system/storage-provisioner" podUID="f85b2808-26fa-4608-a208-2c11eaddc293"
	Jun 03 12:53:42 ha-220492 kubelet[1372]: I0603 12:53:42.399703    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-5z6j2" podStartSLOduration=573.728936842 podStartE2EDuration="9m34.399667987s" podCreationTimestamp="2024-06-03 12:44:08 +0000 UTC" firstStartedPulling="2024-06-03 12:44:09.141426786 +0000 UTC m=+162.042844489" lastFinishedPulling="2024-06-03 12:44:09.812157928 +0000 UTC m=+162.713575634" observedRunningTime="2024-06-03 12:44:10.012742024 +0000 UTC m=+162.914159746" watchObservedRunningTime="2024-06-03 12:53:42.399667987 +0000 UTC m=+735.301085711"
	Jun 03 12:53:45 ha-220492 kubelet[1372]: I0603 12:53:45.247711    1372 scope.go:117] "RemoveContainer" containerID="7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d"
	Jun 03 12:53:55 ha-220492 kubelet[1372]: I0603 12:53:55.248255    1372 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-220492" podUID="577ecb1f-e5df-4494-b898-7d2d8e79151d"
	Jun 03 12:53:55 ha-220492 kubelet[1372]: I0603 12:53:55.269355    1372 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-220492"
	Jun 03 12:54:27 ha-220492 kubelet[1372]: E0603 12:54:27.281634    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:54:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:54:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:54:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:54:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:54:56.497044 1103992 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19011-1078924/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-220492 -n ha-220492
helpers_test.go:261: (dbg) Run:  kubectl --context ha-220492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.22s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 stop -v=7 --alsologtostderr: exit status 82 (2m0.476285148s)

                                                
                                                
-- stdout --
	* Stopping node "ha-220492-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:55:16.120755 1104398 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:55:16.120861 1104398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:55:16.120869 1104398 out.go:304] Setting ErrFile to fd 2...
	I0603 12:55:16.120873 1104398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:55:16.121082 1104398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:55:16.121297 1104398 out.go:298] Setting JSON to false
	I0603 12:55:16.121364 1104398 mustload.go:65] Loading cluster: ha-220492
	I0603 12:55:16.121751 1104398 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:55:16.121840 1104398 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:55:16.122017 1104398 mustload.go:65] Loading cluster: ha-220492
	I0603 12:55:16.122141 1104398 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:55:16.122164 1104398 stop.go:39] StopHost: ha-220492-m04
	I0603 12:55:16.122552 1104398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:55:16.122598 1104398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:55:16.138238 1104398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I0603 12:55:16.138679 1104398 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:55:16.140305 1104398 main.go:141] libmachine: Using API Version  1
	I0603 12:55:16.140327 1104398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:55:16.140724 1104398 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:55:16.143923 1104398 out.go:177] * Stopping node "ha-220492-m04"  ...
	I0603 12:55:16.145311 1104398 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 12:55:16.145336 1104398 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:55:16.145604 1104398 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 12:55:16.145629 1104398 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:55:16.148397 1104398 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:55:16.148839 1104398 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:54:43 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:55:16.148870 1104398 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:55:16.148984 1104398 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:55:16.149175 1104398 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:55:16.149333 1104398 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:55:16.149488 1104398 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	I0603 12:55:16.235603 1104398 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 12:55:16.288151 1104398 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 12:55:16.341153 1104398 main.go:141] libmachine: Stopping "ha-220492-m04"...
	I0603 12:55:16.341202 1104398 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:55:16.342861 1104398 main.go:141] libmachine: (ha-220492-m04) Calling .Stop
	I0603 12:55:16.346323 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 0/120
	I0603 12:55:17.348150 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 1/120
	I0603 12:55:18.349592 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 2/120
	I0603 12:55:19.350985 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 3/120
	I0603 12:55:20.352389 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 4/120
	I0603 12:55:21.354368 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 5/120
	I0603 12:55:22.356638 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 6/120
	I0603 12:55:23.358523 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 7/120
	I0603 12:55:24.359807 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 8/120
	I0603 12:55:25.361231 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 9/120
	I0603 12:55:26.363387 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 10/120
	I0603 12:55:27.364825 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 11/120
	I0603 12:55:28.366809 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 12/120
	I0603 12:55:29.368230 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 13/120
	I0603 12:55:30.369676 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 14/120
	I0603 12:55:31.371468 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 15/120
	I0603 12:55:32.373245 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 16/120
	I0603 12:55:33.374389 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 17/120
	I0603 12:55:34.375700 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 18/120
	I0603 12:55:35.376985 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 19/120
	I0603 12:55:36.379175 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 20/120
	I0603 12:55:37.380565 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 21/120
	I0603 12:55:38.381766 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 22/120
	I0603 12:55:39.384030 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 23/120
	I0603 12:55:40.385602 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 24/120
	I0603 12:55:41.388246 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 25/120
	I0603 12:55:42.389553 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 26/120
	I0603 12:55:43.390753 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 27/120
	I0603 12:55:44.392075 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 28/120
	I0603 12:55:45.393665 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 29/120
	I0603 12:55:46.395641 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 30/120
	I0603 12:55:47.397187 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 31/120
	I0603 12:55:48.398773 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 32/120
	I0603 12:55:49.400125 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 33/120
	I0603 12:55:50.402387 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 34/120
	I0603 12:55:51.404462 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 35/120
	I0603 12:55:52.405946 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 36/120
	I0603 12:55:53.408019 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 37/120
	I0603 12:55:54.409207 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 38/120
	I0603 12:55:55.411594 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 39/120
	I0603 12:55:56.413695 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 40/120
	I0603 12:55:57.415906 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 41/120
	I0603 12:55:58.417329 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 42/120
	I0603 12:55:59.418722 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 43/120
	I0603 12:56:00.420216 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 44/120
	I0603 12:56:01.422045 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 45/120
	I0603 12:56:02.424125 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 46/120
	I0603 12:56:03.426765 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 47/120
	I0603 12:56:04.428715 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 48/120
	I0603 12:56:05.430538 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 49/120
	I0603 12:56:06.432685 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 50/120
	I0603 12:56:07.434225 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 51/120
	I0603 12:56:08.435964 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 52/120
	I0603 12:56:09.437610 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 53/120
	I0603 12:56:10.438994 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 54/120
	I0603 12:56:11.440859 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 55/120
	I0603 12:56:12.442502 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 56/120
	I0603 12:56:13.443836 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 57/120
	I0603 12:56:14.445039 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 58/120
	I0603 12:56:15.446510 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 59/120
	I0603 12:56:16.448840 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 60/120
	I0603 12:56:17.450545 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 61/120
	I0603 12:56:18.452122 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 62/120
	I0603 12:56:19.453274 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 63/120
	I0603 12:56:20.454676 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 64/120
	I0603 12:56:21.456467 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 65/120
	I0603 12:56:22.458185 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 66/120
	I0603 12:56:23.459547 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 67/120
	I0603 12:56:24.460720 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 68/120
	I0603 12:56:25.462773 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 69/120
	I0603 12:56:26.464983 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 70/120
	I0603 12:56:27.466153 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 71/120
	I0603 12:56:28.467941 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 72/120
	I0603 12:56:29.469132 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 73/120
	I0603 12:56:30.470536 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 74/120
	I0603 12:56:31.472428 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 75/120
	I0603 12:56:32.473947 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 76/120
	I0603 12:56:33.475401 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 77/120
	I0603 12:56:34.476791 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 78/120
	I0603 12:56:35.478105 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 79/120
	I0603 12:56:36.479957 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 80/120
	I0603 12:56:37.481441 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 81/120
	I0603 12:56:38.482768 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 82/120
	I0603 12:56:39.484236 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 83/120
	I0603 12:56:40.486581 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 84/120
	I0603 12:56:41.488810 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 85/120
	I0603 12:56:42.490143 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 86/120
	I0603 12:56:43.491888 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 87/120
	I0603 12:56:44.493211 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 88/120
	I0603 12:56:45.494925 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 89/120
	I0603 12:56:46.497043 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 90/120
	I0603 12:56:47.498260 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 91/120
	I0603 12:56:48.500290 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 92/120
	I0603 12:56:49.501602 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 93/120
	I0603 12:56:50.503048 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 94/120
	I0603 12:56:51.504987 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 95/120
	I0603 12:56:52.506450 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 96/120
	I0603 12:56:53.507675 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 97/120
	I0603 12:56:54.509292 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 98/120
	I0603 12:56:55.511230 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 99/120
	I0603 12:56:56.513726 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 100/120
	I0603 12:56:57.514973 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 101/120
	I0603 12:56:58.516197 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 102/120
	I0603 12:56:59.517505 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 103/120
	I0603 12:57:00.518774 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 104/120
	I0603 12:57:01.520560 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 105/120
	I0603 12:57:02.521781 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 106/120
	I0603 12:57:03.523045 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 107/120
	I0603 12:57:04.524177 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 108/120
	I0603 12:57:05.525694 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 109/120
	I0603 12:57:06.527676 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 110/120
	I0603 12:57:07.529577 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 111/120
	I0603 12:57:08.530925 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 112/120
	I0603 12:57:09.532167 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 113/120
	I0603 12:57:10.533298 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 114/120
	I0603 12:57:11.534935 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 115/120
	I0603 12:57:12.536347 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 116/120
	I0603 12:57:13.537986 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 117/120
	I0603 12:57:14.539960 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 118/120
	I0603 12:57:15.542146 1104398 main.go:141] libmachine: (ha-220492-m04) Waiting for machine to stop 119/120
	I0603 12:57:16.543074 1104398 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 12:57:16.543158 1104398 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 12:57:16.545026 1104398 out.go:177] 
	W0603 12:57:16.546552 1104398 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 12:57:16.546573 1104398 out.go:239] * 
	* 
	W0603 12:57:16.550946 1104398 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 12:57:16.552341 1104398 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-220492 stop -v=7 --alsologtostderr": exit status 82
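The failure follows directly from the loop visible in the stderr above: after the stop request, the kvm2 driver's state is polled roughly once per second for 120 attempts, and because ha-220492-m04 still reports "Running" at the end, the command gives up and exits with status 82 (GUEST_STOP_TIMEOUT). The sketch below illustrates that poll-until-stopped pattern in Go; the function and the stub driver callbacks are hypothetical stand-ins, not minikube's implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithTimeout issues a stop request, then polls the machine state once per
// second, mirroring the "Waiting for machine to stop N/120" loop in the log.
// If the state never becomes "Stopped", it returns the error that surfaces as
// GUEST_STOP_TIMEOUT (exit status 82) in the test output.
func stopWithTimeout(requestStop func() error, getState func() (string, error), attempts int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if st, err := getState(); err == nil && st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Stub driver that never stops, reproducing the timeout path from the log.
	err := stopWithTimeout(
		func() error { return nil },                      // the shutdown request itself succeeds
		func() (string, error) { return "Running", nil }, // the state never changes
		5, // shortened from 120 for the demo
	)
	fmt.Println("stop result:", err)
}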
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr: exit status 3 (18.911654285s)

                                                
                                                
-- stdout --
	ha-220492
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-220492-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:57:16.602463 1104844 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:57:16.602739 1104844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:57:16.602749 1104844 out.go:304] Setting ErrFile to fd 2...
	I0603 12:57:16.602754 1104844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:57:16.602933 1104844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:57:16.603145 1104844 out.go:298] Setting JSON to false
	I0603 12:57:16.603178 1104844 mustload.go:65] Loading cluster: ha-220492
	I0603 12:57:16.603275 1104844 notify.go:220] Checking for updates...
	I0603 12:57:16.603549 1104844 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:57:16.603565 1104844 status.go:255] checking status of ha-220492 ...
	I0603 12:57:16.603893 1104844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:57:16.603964 1104844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:57:16.628104 1104844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
	I0603 12:57:16.628574 1104844 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:57:16.629374 1104844 main.go:141] libmachine: Using API Version  1
	I0603 12:57:16.629419 1104844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:57:16.629760 1104844 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:57:16.629932 1104844 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:57:16.631597 1104844 status.go:330] ha-220492 host status = "Running" (err=<nil>)
	I0603 12:57:16.631617 1104844 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:57:16.631893 1104844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:57:16.631933 1104844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:57:16.646781 1104844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I0603 12:57:16.647262 1104844 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:57:16.647724 1104844 main.go:141] libmachine: Using API Version  1
	I0603 12:57:16.647746 1104844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:57:16.648124 1104844 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:57:16.648321 1104844 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:57:16.651157 1104844 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:57:16.651609 1104844 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:57:16.651637 1104844 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:57:16.651762 1104844 host.go:66] Checking if "ha-220492" exists ...
	I0603 12:57:16.652125 1104844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:57:16.652169 1104844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:57:16.669165 1104844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0603 12:57:16.669716 1104844 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:57:16.670222 1104844 main.go:141] libmachine: Using API Version  1
	I0603 12:57:16.670244 1104844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:57:16.670618 1104844 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:57:16.670831 1104844 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:57:16.671023 1104844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:57:16.671065 1104844 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:57:16.674099 1104844 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:57:16.674531 1104844 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:57:16.674560 1104844 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:57:16.674736 1104844 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:57:16.674939 1104844 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:57:16.675118 1104844 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:57:16.675245 1104844 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:57:16.761963 1104844 ssh_runner.go:195] Run: systemctl --version
	I0603 12:57:16.768325 1104844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:57:16.783817 1104844 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:57:16.783874 1104844 api_server.go:166] Checking apiserver status ...
	I0603 12:57:16.783925 1104844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:57:16.799053 1104844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5096/cgroup
	W0603 12:57:16.808574 1104844 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5096/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:57:16.808630 1104844 ssh_runner.go:195] Run: ls
	I0603 12:57:16.813232 1104844 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:57:16.817278 1104844 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:57:16.817302 1104844 status.go:422] ha-220492 apiserver status = Running (err=<nil>)
	I0603 12:57:16.817313 1104844 status.go:257] ha-220492 status: &{Name:ha-220492 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:57:16.817333 1104844 status.go:255] checking status of ha-220492-m02 ...
	I0603 12:57:16.817680 1104844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:57:16.817731 1104844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:57:16.832659 1104844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38375
	I0603 12:57:16.833170 1104844 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:57:16.833762 1104844 main.go:141] libmachine: Using API Version  1
	I0603 12:57:16.833784 1104844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:57:16.834158 1104844 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:57:16.834360 1104844 main.go:141] libmachine: (ha-220492-m02) Calling .GetState
	I0603 12:57:16.835818 1104844 status.go:330] ha-220492-m02 host status = "Running" (err=<nil>)
	I0603 12:57:16.835836 1104844 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:57:16.836142 1104844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:57:16.836204 1104844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:57:16.850811 1104844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0603 12:57:16.851163 1104844 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:57:16.851606 1104844 main.go:141] libmachine: Using API Version  1
	I0603 12:57:16.851623 1104844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:57:16.851941 1104844 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:57:16.852140 1104844 main.go:141] libmachine: (ha-220492-m02) Calling .GetIP
	I0603 12:57:16.854697 1104844 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:57:16.855087 1104844 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:52:30 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:57:16.855115 1104844 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:57:16.855259 1104844 host.go:66] Checking if "ha-220492-m02" exists ...
	I0603 12:57:16.855557 1104844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:57:16.855590 1104844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:57:16.870668 1104844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
	I0603 12:57:16.871133 1104844 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:57:16.871599 1104844 main.go:141] libmachine: Using API Version  1
	I0603 12:57:16.871627 1104844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:57:16.871926 1104844 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:57:16.872104 1104844 main.go:141] libmachine: (ha-220492-m02) Calling .DriverName
	I0603 12:57:16.872302 1104844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:57:16.872328 1104844 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHHostname
	I0603 12:57:16.874777 1104844 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:57:16.875240 1104844 main.go:141] libmachine: (ha-220492-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:56:2b", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:52:30 +0000 UTC Type:0 Mac:52:54:00:5d:56:2b Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-220492-m02 Clientid:01:52:54:00:5d:56:2b}
	I0603 12:57:16.875263 1104844 main.go:141] libmachine: (ha-220492-m02) DBG | domain ha-220492-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:5d:56:2b in network mk-ha-220492
	I0603 12:57:16.875370 1104844 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHPort
	I0603 12:57:16.875548 1104844 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHKeyPath
	I0603 12:57:16.875687 1104844 main.go:141] libmachine: (ha-220492-m02) Calling .GetSSHUsername
	I0603 12:57:16.875807 1104844 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m02/id_rsa Username:docker}
	I0603 12:57:16.959054 1104844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 12:57:16.977510 1104844 kubeconfig.go:125] found "ha-220492" server: "https://192.168.39.254:8443"
	I0603 12:57:16.977547 1104844 api_server.go:166] Checking apiserver status ...
	I0603 12:57:16.977600 1104844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 12:57:16.996189 1104844 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup
	W0603 12:57:17.006336 1104844 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 12:57:17.006385 1104844 ssh_runner.go:195] Run: ls
	I0603 12:57:17.010664 1104844 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0603 12:57:17.014999 1104844 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0603 12:57:17.015023 1104844 status.go:422] ha-220492-m02 apiserver status = Running (err=<nil>)
	I0603 12:57:17.015035 1104844 status.go:257] ha-220492-m02 status: &{Name:ha-220492-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 12:57:17.015059 1104844 status.go:255] checking status of ha-220492-m04 ...
	I0603 12:57:17.015432 1104844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:57:17.015475 1104844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:57:17.030872 1104844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0603 12:57:17.031335 1104844 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:57:17.031924 1104844 main.go:141] libmachine: Using API Version  1
	I0603 12:57:17.031945 1104844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:57:17.032308 1104844 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:57:17.032559 1104844 main.go:141] libmachine: (ha-220492-m04) Calling .GetState
	I0603 12:57:17.034331 1104844 status.go:330] ha-220492-m04 host status = "Running" (err=<nil>)
	I0603 12:57:17.034351 1104844 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:57:17.034685 1104844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:57:17.034721 1104844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:57:17.050331 1104844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I0603 12:57:17.050705 1104844 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:57:17.051143 1104844 main.go:141] libmachine: Using API Version  1
	I0603 12:57:17.051161 1104844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:57:17.051507 1104844 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:57:17.051691 1104844 main.go:141] libmachine: (ha-220492-m04) Calling .GetIP
	I0603 12:57:17.054350 1104844 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:57:17.054766 1104844 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:54:43 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:57:17.054794 1104844 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:57:17.054916 1104844 host.go:66] Checking if "ha-220492-m04" exists ...
	I0603 12:57:17.055356 1104844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:57:17.055415 1104844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:57:17.071039 1104844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0603 12:57:17.071483 1104844 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:57:17.071956 1104844 main.go:141] libmachine: Using API Version  1
	I0603 12:57:17.071983 1104844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:57:17.072357 1104844 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:57:17.072565 1104844 main.go:141] libmachine: (ha-220492-m04) Calling .DriverName
	I0603 12:57:17.072721 1104844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 12:57:17.072744 1104844 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHHostname
	I0603 12:57:17.075428 1104844 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:57:17.075805 1104844 main.go:141] libmachine: (ha-220492-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:45:9f", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:54:43 +0000 UTC Type:0 Mac:52:54:00:ce:45:9f Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-220492-m04 Clientid:01:52:54:00:ce:45:9f}
	I0603 12:57:17.075834 1104844 main.go:141] libmachine: (ha-220492-m04) DBG | domain ha-220492-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ce:45:9f in network mk-ha-220492
	I0603 12:57:17.075980 1104844 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHPort
	I0603 12:57:17.076153 1104844 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHKeyPath
	I0603 12:57:17.076304 1104844 main.go:141] libmachine: (ha-220492-m04) Calling .GetSSHUsername
	I0603 12:57:17.076434 1104844 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492-m04/id_rsa Username:docker}
	W0603 12:57:35.465668 1104844 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.76:22: connect: no route to host
	W0603 12:57:35.465818 1104844 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host
	E0603 12:57:35.465841 1104844 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host
	I0603 12:57:35.465849 1104844 status.go:257] ha-220492-m04 status: &{Name:ha-220492-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0603 12:57:35.465872 1104844 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr" : exit status 3
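The status output above is assembled per node: the driver state is read first, kubelet is then checked over SSH with "systemctl is-active", and for control-plane nodes the apiserver is probed at https://192.168.39.254:8443/healthz (both control-plane probes returned 200 "ok" here, while m04 failed earlier at the SSH dial with "no route to host"). A minimal Go sketch of that healthz probe follows; TLS verification is skipped purely for brevity, whereas minikube itself trusts the cluster CA from the kubeconfig, so treat this as an illustration only.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same kind of probe the status command logs above:
// GET <server>/healthz and treat an HTTP 200 as healthy.
func checkHealthz(server string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	resp, err := client.Get(server + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s/healthz returned %d: %s\n", server, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443"); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}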
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-220492 -n ha-220492
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-220492 logs -n 25: (1.662914873s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-220492 ssh -n ha-220492-m02 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04:/home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m04 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp testdata/cp-test.txt                                                | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3428699095/001/cp-test_ha-220492-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492:/home/docker/cp-test_ha-220492-m04_ha-220492.txt                       |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492 sudo cat                                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492.txt                                 |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m02:/home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m02 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m03:/home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n                                                                 | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | ha-220492-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-220492 ssh -n ha-220492-m03 sudo cat                                          | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC | 03 Jun 24 12:45 UTC |
	|         | /home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-220492 node stop m02 -v=7                                                     | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-220492 node start m02 -v=7                                                    | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-220492 -v=7                                                           | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-220492 -v=7                                                                | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-220492 --wait=true -v=7                                                    | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:50 UTC | 03 Jun 24 12:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-220492                                                                | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:54 UTC |                     |
	| node    | ha-220492 node delete m03 -v=7                                                   | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:54 UTC | 03 Jun 24 12:55 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-220492 stop -v=7                                                              | ha-220492 | jenkins | v1.33.1 | 03 Jun 24 12:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:50:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:50:35.514830 1102637 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:50:35.515133 1102637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:50:35.515143 1102637 out.go:304] Setting ErrFile to fd 2...
	I0603 12:50:35.515148 1102637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:50:35.515311 1102637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:50:35.515865 1102637 out.go:298] Setting JSON to false
	I0603 12:50:35.516966 1102637 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12782,"bootTime":1717406253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:50:35.517029 1102637 start.go:139] virtualization: kvm guest
	I0603 12:50:35.519522 1102637 out.go:177] * [ha-220492] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:50:35.520944 1102637 notify.go:220] Checking for updates...
	I0603 12:50:35.520949 1102637 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:50:35.522446 1102637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:50:35.523880 1102637 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:50:35.525245 1102637 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:50:35.526538 1102637 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:50:35.527652 1102637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:50:35.529206 1102637 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:50:35.529361 1102637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:50:35.529833 1102637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:50:35.529896 1102637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:50:35.545387 1102637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0603 12:50:35.545847 1102637 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:50:35.546423 1102637 main.go:141] libmachine: Using API Version  1
	I0603 12:50:35.546443 1102637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:50:35.546886 1102637 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:50:35.547111 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:50:35.582575 1102637 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:50:35.583720 1102637 start.go:297] selected driver: kvm2
	I0603 12:50:35.583742 1102637 start.go:901] validating driver "kvm2" against &{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:50:35.583925 1102637 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:50:35.584320 1102637 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:50:35.584400 1102637 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:50:35.599685 1102637 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:50:35.600408 1102637 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 12:50:35.600502 1102637 cni.go:84] Creating CNI manager for ""
	I0603 12:50:35.600517 1102637 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 12:50:35.600594 1102637 start.go:340] cluster config:
	{Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:50:35.600742 1102637 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:50:35.602341 1102637 out.go:177] * Starting "ha-220492" primary control-plane node in "ha-220492" cluster
	I0603 12:50:35.603459 1102637 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:50:35.603491 1102637 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 12:50:35.603506 1102637 cache.go:56] Caching tarball of preloaded images
	I0603 12:50:35.603592 1102637 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 12:50:35.603607 1102637 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 12:50:35.603726 1102637 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/config.json ...
	I0603 12:50:35.603916 1102637 start.go:360] acquireMachinesLock for ha-220492: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 12:50:35.603962 1102637 start.go:364] duration metric: took 26.027µs to acquireMachinesLock for "ha-220492"
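(Editorial note: the acquireMachinesLock entry above reports how long the lock took, with a 500ms retry delay and 13m timeout configured in the lock spec. The following is only a rough sketch of that acquire-with-retry pattern using an O_EXCL lock file and Go's standard library; it is not minikube's actual lock implementation, and the lock path is illustrative.)

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock repeatedly tries to create lockPath exclusively, sleeping
	// delay between attempts, and gives up after timeout. It returns how long
	// the acquisition took, mirroring the "duration metric" lines in the log.
	func acquireLock(lockPath string, delay, timeout time.Duration) (time.Duration, error) {
		start := time.Now()
		deadline := start.Add(timeout)
		for {
			f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
			if err == nil {
				f.Close()
				return time.Since(start), nil
			}
			if time.Now().After(deadline) {
				return time.Since(start), fmt.Errorf("timed out acquiring %s: %w", lockPath, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		took, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println("lock error:", err)
			return
		}
		fmt.Printf("took %s to acquire lock\n", took)
		os.Remove("/tmp/machines.lock") // release
	}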
	I0603 12:50:35.603981 1102637 start.go:96] Skipping create...Using existing machine configuration
	I0603 12:50:35.603992 1102637 fix.go:54] fixHost starting: 
	I0603 12:50:35.604256 1102637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:50:35.604294 1102637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:50:35.619312 1102637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0603 12:50:35.619706 1102637 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:50:35.620170 1102637 main.go:141] libmachine: Using API Version  1
	I0603 12:50:35.620186 1102637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:50:35.620566 1102637 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:50:35.620771 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:50:35.620914 1102637 main.go:141] libmachine: (ha-220492) Calling .GetState
	I0603 12:50:35.622495 1102637 fix.go:112] recreateIfNeeded on ha-220492: state=Running err=<nil>
	W0603 12:50:35.622515 1102637 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 12:50:35.624360 1102637 out.go:177] * Updating the running kvm2 "ha-220492" VM ...
	I0603 12:50:35.625691 1102637 machine.go:94] provisionDockerMachine start ...
	I0603 12:50:35.625714 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:50:35.625912 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:35.628346 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.628817 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:35.628848 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.628947 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:35.629151 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.629350 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.629521 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:35.629695 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:35.629919 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:50:35.629935 1102637 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 12:50:35.746750 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492
	
	I0603 12:50:35.746779 1102637 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:50:35.747023 1102637 buildroot.go:166] provisioning hostname "ha-220492"
	I0603 12:50:35.747057 1102637 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:50:35.747265 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:35.749695 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.750078 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:35.750104 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.750206 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:35.750394 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.750581 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.750732 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:35.750877 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:35.751088 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:50:35.751101 1102637 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-220492 && echo "ha-220492" | sudo tee /etc/hostname
	I0603 12:50:35.887820 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-220492
	
	I0603 12:50:35.887850 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:35.890703 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.891126 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:35.891155 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:35.891374 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:35.891586 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.891748 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:35.891917 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:35.892088 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:35.892307 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:50:35.892329 1102637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-220492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-220492/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-220492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 12:50:36.002397 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
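(Editorial note: the SSH script above keeps /etc/hosts consistent with the new hostname: if no entry mentions ha-220492 it either rewrites the existing 127.0.1.1 line or appends one. Below is a hedged Go sketch of the same idempotent edit; it operates on an arbitrary hosts-format file, and its "already present" check is a looser substring match than the script's anchored grep.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell logic in the log: if any line already
	// references name, do nothing; otherwise rewrite an existing "127.0.1.1 ..."
	// line, or append one if none exists.
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			if strings.Contains(l, name) {
				return nil // already present, nothing to do
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+name)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
	}

	func main() {
		// Illustrative scratch file, not the VM's real /etc/hosts.
		if err := ensureHostsEntry("/tmp/hosts.test", "ha-220492"); err != nil {
			fmt.Println(err)
		}
	}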
	I0603 12:50:36.002451 1102637 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 12:50:36.002477 1102637 buildroot.go:174] setting up certificates
	I0603 12:50:36.002489 1102637 provision.go:84] configureAuth start
	I0603 12:50:36.002504 1102637 main.go:141] libmachine: (ha-220492) Calling .GetMachineName
	I0603 12:50:36.002791 1102637 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:50:36.005045 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.005489 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:36.005534 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.005720 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:36.007979 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.008371 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:36.008414 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.008549 1102637 provision.go:143] copyHostCerts
	I0603 12:50:36.008587 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:50:36.008641 1102637 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 12:50:36.008655 1102637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 12:50:36.008715 1102637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 12:50:36.008813 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:50:36.008834 1102637 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 12:50:36.008838 1102637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 12:50:36.008865 1102637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 12:50:36.008931 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:50:36.008947 1102637 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 12:50:36.008953 1102637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 12:50:36.008973 1102637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 12:50:36.009031 1102637 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.ha-220492 san=[127.0.0.1 192.168.39.6 ha-220492 localhost minikube]
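(Editorial note: provision.go:117 above issues a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube, signed by the cached CA key pair. The crypto/x509 sketch below is illustrative only: it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and the key type and validity period are assumptions.)

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (minikube would reuse its cached ca.pem / ca-key.pem).
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs listed in the log entry above.
		srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-220492"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-220492", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.6")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}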
	I0603 12:50:36.426579 1102637 provision.go:177] copyRemoteCerts
	I0603 12:50:36.426645 1102637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 12:50:36.426673 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:36.429050 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.429506 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:36.429535 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.429719 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:36.429931 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:36.430110 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:36.430266 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:50:36.516114 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 12:50:36.516187 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 12:50:36.542904 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 12:50:36.542968 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0603 12:50:36.567580 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 12:50:36.567634 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 12:50:36.595013 1102637 provision.go:87] duration metric: took 592.509171ms to configureAuth
	I0603 12:50:36.595036 1102637 buildroot.go:189] setting minikube options for container-runtime
	I0603 12:50:36.595236 1102637 config.go:182] Loaded profile config "ha-220492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:50:36.595332 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:50:36.597935 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.598319 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:50:36.598350 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:50:36.598531 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:50:36.598780 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:36.598945 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:50:36.599091 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:50:36.599233 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:50:36.599408 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:50:36.599423 1102637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 12:52:07.347681 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 12:52:07.347723 1102637 machine.go:97] duration metric: took 1m31.722013399s to provisionDockerMachine
	I0603 12:52:07.347740 1102637 start.go:293] postStartSetup for "ha-220492" (driver="kvm2")
	I0603 12:52:07.347754 1102637 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 12:52:07.347778 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.348150 1102637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 12:52:07.348197 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.351364 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.351779 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.351809 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.351971 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.352156 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.352294 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.352395 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:52:07.441051 1102637 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 12:52:07.445489 1102637 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 12:52:07.445521 1102637 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 12:52:07.445582 1102637 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 12:52:07.445653 1102637 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 12:52:07.445664 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 12:52:07.445740 1102637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 12:52:07.455307 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:52:07.479112 1102637 start.go:296] duration metric: took 131.353728ms for postStartSetup
	I0603 12:52:07.479169 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.479534 1102637 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0603 12:52:07.479563 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.482050 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.482452 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.482475 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.482629 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.482807 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.482961 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.483080 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	W0603 12:52:07.567189 1102637 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0603 12:52:07.567220 1102637 fix.go:56] duration metric: took 1m31.963229548s for fixHost
	I0603 12:52:07.567251 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.569872 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.570344 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.570374 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.570549 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.570753 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.570934 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.571062 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.571235 1102637 main.go:141] libmachine: Using SSH client type: native
	I0603 12:52:07.571406 1102637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0603 12:52:07.571417 1102637 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 12:52:07.682126 1102637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717419127.651620369
	
	I0603 12:52:07.682152 1102637 fix.go:216] guest clock: 1717419127.651620369
	I0603 12:52:07.682159 1102637 fix.go:229] Guest: 2024-06-03 12:52:07.651620369 +0000 UTC Remote: 2024-06-03 12:52:07.567236399 +0000 UTC m=+92.091400626 (delta=84.38397ms)
	I0603 12:52:07.682181 1102637 fix.go:200] guest clock delta is within tolerance: 84.38397ms
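(Editorial note: fix.go reads the guest clock with `date +%s.%N`, converts the output to a timestamp and compares it against the host-side "Remote" time; the 84.38397ms delta above is within tolerance, so no clock sync is forced. A small sketch of that parse-and-compare step follows, reproducing the logged delta; the one-second tolerance is an assumption, not minikube's actual threshold.)

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
	// into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		guest, err := parseGuestClock("1717419127.651620369") // guest value from the log
		if err != nil {
			panic(err)
		}
		// Host-side "Remote" timestamp from the same log entry.
		host := time.Date(2024, 6, 3, 12, 52, 7, 567236399, time.UTC)
		delta := guest.Sub(host)
		tolerance := time.Second // illustrative threshold
		fmt.Printf("guest clock delta: %s (within %s tolerance: %v)\n",
			delta, tolerance, delta < tolerance && delta > -tolerance)
	}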
	I0603 12:52:07.682186 1102637 start.go:83] releasing machines lock for "ha-220492", held for 1m32.078213239s
	I0603 12:52:07.682210 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.682493 1102637 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:52:07.685004 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.685375 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.685419 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.685578 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.686077 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.686283 1102637 main.go:141] libmachine: (ha-220492) Calling .DriverName
	I0603 12:52:07.686376 1102637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 12:52:07.686443 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.686532 1102637 ssh_runner.go:195] Run: cat /version.json
	I0603 12:52:07.686550 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHHostname
	I0603 12:52:07.689066 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.689266 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.689531 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.689564 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.689729 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.689869 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:07.689895 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:07.689903 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.690031 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHPort
	I0603 12:52:07.690088 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.690225 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHKeyPath
	I0603 12:52:07.690289 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:52:07.690399 1102637 main.go:141] libmachine: (ha-220492) Calling .GetSSHUsername
	I0603 12:52:07.690562 1102637 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/ha-220492/id_rsa Username:docker}
	I0603 12:52:07.796817 1102637 ssh_runner.go:195] Run: systemctl --version
	I0603 12:52:07.803106 1102637 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 12:52:07.966552 1102637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 12:52:07.973438 1102637 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 12:52:07.973515 1102637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 12:52:07.983548 1102637 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 12:52:07.983573 1102637 start.go:494] detecting cgroup driver to use...
	I0603 12:52:07.983645 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 12:52:08.002448 1102637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 12:52:08.017143 1102637 docker.go:217] disabling cri-docker service (if available) ...
	I0603 12:52:08.017196 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 12:52:08.033497 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 12:52:08.047156 1102637 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 12:52:08.208328 1102637 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 12:52:08.352367 1102637 docker.go:233] disabling docker service ...
	I0603 12:52:08.352446 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 12:52:08.368218 1102637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 12:52:08.381892 1102637 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 12:52:08.529995 1102637 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 12:52:08.676854 1102637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 12:52:08.690303 1102637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 12:52:08.708663 1102637 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 12:52:08.708740 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.718943 1102637 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 12:52:08.719008 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.728952 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.739045 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.749070 1102637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 12:52:08.759914 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.770265 1102637 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 12:52:08.781587 1102637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
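(Editorial note: the sequence of sed invocations above pins the CRI-O pause image, switches the cgroup manager to cgroupfs and injects a default_sysctls entry into /etc/crio/crio.conf.d/02-crio.conf. Below is a hedged Go sketch of the same kind of line-oriented config rewrite: replace a `key = value` line if one matches, append it otherwise. It operates on a scratch file, not the real drop-in.)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setTOMLKey rewrites `key = ...` in a CRI-O style drop-in, appending the
	// setting when no existing line matches, roughly what the sed commands in
	// the log do for pause_image and cgroup_manager.
	func setTOMLKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		line := fmt.Sprintf("%s = %q", key, value)
		if re.Match(data) {
			data = re.ReplaceAll(data, []byte(line))
		} else {
			data = append(data, []byte(line+"\n")...)
		}
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		cfg := "/tmp/02-crio.conf" // scratch copy, not the real drop-in
		_ = setTOMLKey(cfg, "pause_image", "registry.k8s.io/pause:3.9")
		_ = setTOMLKey(cfg, "cgroup_manager", "cgroupfs")
	}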
	I0603 12:52:08.791987 1102637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 12:52:08.801365 1102637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 12:52:08.810615 1102637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:52:08.949844 1102637 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 12:52:18.307826 1102637 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.357935721s)
	I0603 12:52:18.307874 1102637 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 12:52:18.307938 1102637 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
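(Editorial note: after restarting CRI-O, which took roughly 9.4s here, start.go waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer. A minimal sketch of that wait-for-socket step follows; the 500ms poll interval is an assumption.)

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the timeout expires, roughly the
	// "Will wait 60s for socket path" step in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond) // illustrative poll interval
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}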
	I0603 12:52:18.313139 1102637 start.go:562] Will wait 60s for crictl version
	I0603 12:52:18.313206 1102637 ssh_runner.go:195] Run: which crictl
	I0603 12:52:18.317717 1102637 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 12:52:18.369157 1102637 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 12:52:18.369249 1102637 ssh_runner.go:195] Run: crio --version
	I0603 12:52:18.403441 1102637 ssh_runner.go:195] Run: crio --version
	I0603 12:52:18.438271 1102637 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 12:52:18.439459 1102637 main.go:141] libmachine: (ha-220492) Calling .GetIP
	I0603 12:52:18.442111 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:18.442466 1102637 main.go:141] libmachine: (ha-220492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:0d:a6", ip: ""} in network mk-ha-220492: {Iface:virbr1 ExpiryTime:2024-06-03 13:40:59 +0000 UTC Type:0 Mac:52:54:00:79:0d:a6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-220492 Clientid:01:52:54:00:79:0d:a6}
	I0603 12:52:18.442492 1102637 main.go:141] libmachine: (ha-220492) DBG | domain ha-220492 has defined IP address 192.168.39.6 and MAC address 52:54:00:79:0d:a6 in network mk-ha-220492
	I0603 12:52:18.442699 1102637 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 12:52:18.447687 1102637 kubeadm.go:877] updating cluster {Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 12:52:18.447857 1102637 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 12:52:18.447924 1102637 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:52:18.491510 1102637 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:52:18.491534 1102637 crio.go:433] Images already preloaded, skipping extraction
	I0603 12:52:18.491585 1102637 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 12:52:18.532159 1102637 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 12:52:18.532187 1102637 cache_images.go:84] Images are preloaded, skipping loading
	I0603 12:52:18.532201 1102637 kubeadm.go:928] updating node { 192.168.39.6 8443 v1.30.1 crio true true} ...
	I0603 12:52:18.532361 1102637 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-220492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
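(Editorial note: kubeadm.go:940 renders the kubelet systemd drop-in above, overriding ExecStart with the per-node flags: hostname-override, kubeconfig and node-ip. The sketch below assembles that flag line from node parameters; the Node struct and helper are illustrative, not minikube's real template code.)

	package main

	import (
		"fmt"
		"strings"
	)

	// Node carries the per-node values substituted into the drop-in above.
	type Node struct {
		Name              string
		IP                string
		KubernetesVersion string
	}

	// kubeletExecStart builds the ExecStart line for the 10-kubeadm.conf drop-in.
	func kubeletExecStart(n Node) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + n.Name,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + n.IP,
		}
		return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet %s",
			n.KubernetesVersion, strings.Join(flags, " "))
	}

	func main() {
		fmt.Println(kubeletExecStart(Node{Name: "ha-220492", IP: "192.168.39.6", KubernetesVersion: "v1.30.1"}))
	}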
	I0603 12:52:18.532447 1102637 ssh_runner.go:195] Run: crio config
	I0603 12:52:18.589584 1102637 cni.go:84] Creating CNI manager for ""
	I0603 12:52:18.589609 1102637 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0603 12:52:18.589619 1102637 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 12:52:18.589642 1102637 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-220492 NodeName:ha-220492 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 12:52:18.589855 1102637 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-220492"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
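(Editorial note: kubeadm.go:187 renders the kubeadm config above from the options struct logged just before it. The text/template sketch below shows that render step under heavy simplification: the template string is a trimmed stand-in for minikube's real one, and the Opts struct is invented for the example.)

	package main

	import (
		"os"
		"text/template"
	)

	// Opts holds the handful of fields used by the trimmed template below.
	type Opts struct {
		AdvertiseAddress string
		APIServerPort    int
		K8sVersion       string
		DNSDomain        string
		PodSubnet        string
		ServiceCIDR      string
	}

	const kubeadmConfig = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  dnsDomain: {{.DNSDomain}}
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		tmpl := template.Must(template.New("kubeadm").Parse(kubeadmConfig))
		opts := Opts{
			AdvertiseAddress: "192.168.39.6",
			APIServerPort:    8443,
			K8sVersion:       "v1.30.1",
			DNSDomain:        "cluster.local",
			PodSubnet:        "10.244.0.0/16",
			ServiceCIDR:      "10.96.0.0/12",
		}
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}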
	
	I0603 12:52:18.589881 1102637 kube-vip.go:115] generating kube-vip config ...
	I0603 12:52:18.589923 1102637 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0603 12:52:18.601928 1102637 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0603 12:52:18.602121 1102637 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0603 12:52:18.602197 1102637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 12:52:18.611932 1102637 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 12:52:18.611992 1102637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0603 12:52:18.621252 1102637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0603 12:52:18.638068 1102637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 12:52:18.654346 1102637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0603 12:52:18.671483 1102637 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0603 12:52:18.689096 1102637 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0603 12:52:18.692932 1102637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 12:52:18.836964 1102637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 12:52:18.851718 1102637 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492 for IP: 192.168.39.6
	I0603 12:52:18.851750 1102637 certs.go:194] generating shared ca certs ...
	I0603 12:52:18.851775 1102637 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:52:18.851995 1102637 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 12:52:18.852049 1102637 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 12:52:18.852062 1102637 certs.go:256] generating profile certs ...
	I0603 12:52:18.852199 1102637 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/client.key
	I0603 12:52:18.852235 1102637 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.ebd0a88a
	I0603 12:52:18.852254 1102637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.ebd0a88a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.106 192.168.39.169 192.168.39.254]
	I0603 12:52:19.038051 1102637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.ebd0a88a ...
	I0603 12:52:19.038085 1102637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.ebd0a88a: {Name:mk67a70e707ac0a534b1f8641bdf1100f902e28f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:52:19.038266 1102637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.ebd0a88a ...
	I0603 12:52:19.038278 1102637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.ebd0a88a: {Name:mkf25d39e1cf8c5ffb2c8ddbb157ca55f89f967b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:52:19.038347 1102637 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt.ebd0a88a -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt
	I0603 12:52:19.038498 1102637 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key.ebd0a88a -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key
	I0603 12:52:19.038631 1102637 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key
	I0603 12:52:19.038646 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 12:52:19.038662 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 12:52:19.038675 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 12:52:19.038688 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 12:52:19.038700 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 12:52:19.038711 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 12:52:19.038725 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 12:52:19.038737 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 12:52:19.038794 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 12:52:19.038822 1102637 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 12:52:19.038832 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 12:52:19.038852 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 12:52:19.038874 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 12:52:19.038896 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 12:52:19.038932 1102637 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 12:52:19.038956 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:52:19.038970 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 12:52:19.038979 1102637 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 12:52:19.039551 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 12:52:19.065283 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 12:52:19.089051 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 12:52:19.114873 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 12:52:19.139510 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 12:52:19.162567 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 12:52:19.185849 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 12:52:19.209559 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/ha-220492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 12:52:19.232743 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 12:52:19.256092 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 12:52:19.280001 1102637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 12:52:19.303208 1102637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 12:52:19.320154 1102637 ssh_runner.go:195] Run: openssl version
	I0603 12:52:19.326131 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 12:52:19.336911 1102637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 12:52:19.341242 1102637 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 12:52:19.341301 1102637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 12:52:19.346946 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 12:52:19.356719 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 12:52:19.367870 1102637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 12:52:19.372463 1102637 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 12:52:19.372519 1102637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 12:52:19.378237 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 12:52:19.388171 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 12:52:19.399393 1102637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:52:19.403735 1102637 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:52:19.403782 1102637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 12:52:19.409342 1102637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
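Editor's note: the `ln -fs ... /etc/ssl/certs/<hash>.0` commands above follow OpenSSL's subject-hash lookup convention: a CA directory is searched by the hash printed by `openssl x509 -hash -noout -in <pem>`, so each installed PEM gets a `<hash>.0` symlink pointing back at it. A minimal standalone sketch of that step in Go, assuming a hypothetical helper name `linkBySubjectHash` (this is not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs/<subject-hash>.0 symlink that
// OpenSSL uses to look up a trusted certificate by subject hash.
func linkBySubjectHash(pemPath, certsDir string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Mirror `ln -fs`: drop any stale link, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}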
	I0603 12:52:19.418843 1102637 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 12:52:19.423188 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 12:52:19.428716 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 12:52:19.434182 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 12:52:19.439688 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 12:52:19.445061 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 12:52:19.450518 1102637 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
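Editor's note: the `-checkend 86400` runs above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would flag the certificate for regeneration. A rough equivalent of that check using Go's crypto/x509, with a hypothetical helper name `expiresWithin` (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate in pemPath expires within d,
// i.e. the same condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}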
	I0603 12:52:19.455940 1102637 kubeadm.go:391] StartCluster: {Name:ha-220492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-220492 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.106 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.76 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
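Editor's note: the StartCluster config above describes the ha-220492 topology under test: three control-plane nodes (the unnamed primary at 192.168.39.6, m02, m03) behind the API-server VIP 192.168.39.254, plus one worker (m04). A toy illustration of reading roles out of such a node list; the `node` struct is a stand-in for the relevant fields, not minikube's actual config type:

package main

import "fmt"

// node is a stand-in for the fields of a minikube node entry that matter here.
type node struct {
	Name         string
	IP           string
	ControlPlane bool
}

func main() {
	nodes := []node{
		{Name: "", IP: "192.168.39.6", ControlPlane: true},
		{Name: "m02", IP: "192.168.39.106", ControlPlane: true},
		{Name: "m03", IP: "192.168.39.169", ControlPlane: true},
		{Name: "m04", IP: "192.168.39.76", ControlPlane: false},
	}
	var controlPlanes, workers int
	for _, n := range nodes {
		if n.ControlPlane {
			controlPlanes++
		} else {
			workers++
		}
	}
	fmt.Printf("control planes: %d, workers: %d\n", controlPlanes, workers)
}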
	I0603 12:52:19.456055 1102637 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 12:52:19.456092 1102637 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 12:52:19.497257 1102637 cri.go:89] found id: "8284f0e2d92cb3b5e720af6b59495e2dc6938032cc42de3a067fb58acf0d7e2b"
	I0603 12:52:19.497287 1102637 cri.go:89] found id: "bdd7def84a8632d39ffccfb334eaa29f667c14ace1db051c37cefe35b2acda2c"
	I0603 12:52:19.497291 1102637 cri.go:89] found id: "8ad33e865c65e667b1c3a3abe78d044c6e9acdaf073c00b7bace9095deb02715"
	I0603 12:52:19.497294 1102637 cri.go:89] found id: "e6ce4d724e4d0c279447ac2f5973f49a09a32638d6fbcbe53b93079a62ed667e"
	I0603 12:52:19.497296 1102637 cri.go:89] found id: "50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e"
	I0603 12:52:19.497305 1102637 cri.go:89] found id: "7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934"
	I0603 12:52:19.497308 1102637 cri.go:89] found id: "1b000c5164ef9debe3c82089d543b68405e7ae72c0f46e233daab8b658621dac"
	I0603 12:52:19.497310 1102637 cri.go:89] found id: "e802c94fbf7b652f64d20242a16e1a092bc293af274ffda5f7da2cdb3726110f"
	I0603 12:52:19.497313 1102637 cri.go:89] found id: "16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5"
	I0603 12:52:19.497319 1102637 cri.go:89] found id: "1fe31d7dcb7c4bece73cdae47d1e4f870a32eb28d62d5b5be6ba47c7aebeef6b"
	I0603 12:52:19.497321 1102637 cri.go:89] found id: "f2c6a50d20a2f169936062c7c4c41810fed1d7c1fbfd8db5b78066436668c44c"
	I0603 12:52:19.497324 1102637 cri.go:89] found id: "3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156"
	I0603 12:52:19.497326 1102637 cri.go:89] found id: "24aa5625e9a8ad09c021e567710cafe54b2645a693d4daeb7b4e26ef9afea15b"
	I0603 12:52:19.497328 1102637 cri.go:89] found id: "86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35"
	I0603 12:52:19.497333 1102637 cri.go:89] found id: ""
	I0603 12:52:19.497376 1102637 ssh_runner.go:195] Run: sudo runc list -f json
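Editor's note: the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` run above prints one container ID per line, which minikube then records as the `found id:` entries that follow it. A small illustrative parser for that output; the function name `parseContainerIDs` is hypothetical, not the actual cri package:

package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs splits `crictl ps --quiet` output into container IDs,
// one per non-empty line.
func parseContainerIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	sample := "8284f0e2d92cb3b5e720af6b59495e2dc6938032cc42de3a067fb58acf0d7e2b\nbdd7def84a8632d39ffccfb334eaa29f667c14ace1db051c37cefe35b2acda2c\n"
	fmt.Println(parseContainerIDs(sample))
}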
	
	
	==> CRI-O <==
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.075984113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717419456075957040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b61476ae-9dea-4b80-bfeb-a5e487013b77 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.077304150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc2af390-4c6b-4622-a72e-0b43db983f34 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.077379716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc2af390-4c6b-4622-a72e-0b43db983f34 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.077810377Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bdbad040139c6bcb4ecf79851d910f1956ae7d70ddb13a89f98e6f364d182cc,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717419225277983339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717419203265462929,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717419191260651439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:814ee2909ea750a0b5ad76b4b2083b3c8847b7a38ab14da4d756db2e147a7bf5,PodSandboxId:6d8da102188476dd055ffac241996f145d4a29e4e78cb13e080e8a4dee1d8ad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717419178550858553,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717419176588367856,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717419175255850915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdf7fec087647f8578e25827a877d7e975efb0c1d754cbd1eeee97e5ce09fa80,PodSandboxId:92b8d227c6d1ed130f22356a680e6f7b9c919e83912a9ae9508debd5183f2caf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717419157557932569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61fd699b695dc1beae4468f8fa8e57e9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359,PodSandboxId:498fb53617c69c314b9a3f9014055e57574506035ab66bb13a4f46960f0c2223,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717419146600723623,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dffbb
4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717419145401912650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c80e06763a2b7bb46c689ad6c8fef0f893f1765af29
1317b666a68ab2bbbc8ec,PodSandboxId:e2131cfde7d9492dab997655ed6ea3d6bc4fe67b5ba4becca7259f73e3c5aa5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145422308062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618,PodSandboxId:7b970b2197689db12520ccfc228623fa21aff90847dab91f9a332f6ea866e828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145375975426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717419145231319917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab
680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883,PodSandboxId:02cc9c7643ae1d60c77512d0aeddf63e8f21db58a64f13cd32f2de0e6f5846de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717419145208721134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc55611
3859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457,PodSandboxId:7f89ba975704b62f04831d12706488d1e68f340ca348752766da5bcf3979e5cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717419145145376217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes
.container.hash: d01e1cae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717419145058310427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kuber
netes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717418649829835451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kuberne
tes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831123170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831526984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717418501295990343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1717418481055378274,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1717418480896883638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc2af390-4c6b-4622-a72e-0b43db983f34 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.129921470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf3760f7-c547-4e1d-82a1-37306f23297f name=/runtime.v1.RuntimeService/Version
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.129997744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf3760f7-c547-4e1d-82a1-37306f23297f name=/runtime.v1.RuntimeService/Version
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.132454409Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0217f7f7-b086-4485-bc41-3f2edc0bc071 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.132919309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717419456132896292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0217f7f7-b086-4485-bc41-3f2edc0bc071 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.133490111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=466282b6-da78-4961-aae8-54dbf7768db0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.133571861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=466282b6-da78-4961-aae8-54dbf7768db0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.134488182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bdbad040139c6bcb4ecf79851d910f1956ae7d70ddb13a89f98e6f364d182cc,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717419225277983339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717419203265462929,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717419191260651439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:814ee2909ea750a0b5ad76b4b2083b3c8847b7a38ab14da4d756db2e147a7bf5,PodSandboxId:6d8da102188476dd055ffac241996f145d4a29e4e78cb13e080e8a4dee1d8ad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717419178550858553,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717419176588367856,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717419175255850915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdf7fec087647f8578e25827a877d7e975efb0c1d754cbd1eeee97e5ce09fa80,PodSandboxId:92b8d227c6d1ed130f22356a680e6f7b9c919e83912a9ae9508debd5183f2caf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717419157557932569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61fd699b695dc1beae4468f8fa8e57e9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359,PodSandboxId:498fb53617c69c314b9a3f9014055e57574506035ab66bb13a4f46960f0c2223,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717419146600723623,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dffbb
4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717419145401912650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c80e06763a2b7bb46c689ad6c8fef0f893f1765af29
1317b666a68ab2bbbc8ec,PodSandboxId:e2131cfde7d9492dab997655ed6ea3d6bc4fe67b5ba4becca7259f73e3c5aa5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145422308062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618,PodSandboxId:7b970b2197689db12520ccfc228623fa21aff90847dab91f9a332f6ea866e828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145375975426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717419145231319917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab
680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883,PodSandboxId:02cc9c7643ae1d60c77512d0aeddf63e8f21db58a64f13cd32f2de0e6f5846de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717419145208721134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc55611
3859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457,PodSandboxId:7f89ba975704b62f04831d12706488d1e68f340ca348752766da5bcf3979e5cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717419145145376217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes
.container.hash: d01e1cae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717419145058310427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kuber
netes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717418649829835451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kuberne
tes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831123170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831526984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717418501295990343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1717418481055378274,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1717418480896883638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=466282b6-da78-4961-aae8-54dbf7768db0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.180635740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=435668dc-06ca-4ebd-a415-1cda035b7494 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.180711764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=435668dc-06ca-4ebd-a415-1cda035b7494 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.181841229Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20e15a1b-ef15-4c8a-a3c1-9c3526ea0923 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.182652109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717419456182626817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20e15a1b-ef15-4c8a-a3c1-9c3526ea0923 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.183324475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33ec6cdd-7f0f-42fd-abdd-ee8b43514128 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.183421009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33ec6cdd-7f0f-42fd-abdd-ee8b43514128 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.183894662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bdbad040139c6bcb4ecf79851d910f1956ae7d70ddb13a89f98e6f364d182cc,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717419225277983339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717419203265462929,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717419191260651439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:814ee2909ea750a0b5ad76b4b2083b3c8847b7a38ab14da4d756db2e147a7bf5,PodSandboxId:6d8da102188476dd055ffac241996f145d4a29e4e78cb13e080e8a4dee1d8ad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717419178550858553,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717419176588367856,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717419175255850915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdf7fec087647f8578e25827a877d7e975efb0c1d754cbd1eeee97e5ce09fa80,PodSandboxId:92b8d227c6d1ed130f22356a680e6f7b9c919e83912a9ae9508debd5183f2caf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717419157557932569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61fd699b695dc1beae4468f8fa8e57e9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359,PodSandboxId:498fb53617c69c314b9a3f9014055e57574506035ab66bb13a4f46960f0c2223,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717419146600723623,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dffbb
4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717419145401912650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c80e06763a2b7bb46c689ad6c8fef0f893f1765af29
1317b666a68ab2bbbc8ec,PodSandboxId:e2131cfde7d9492dab997655ed6ea3d6bc4fe67b5ba4becca7259f73e3c5aa5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145422308062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618,PodSandboxId:7b970b2197689db12520ccfc228623fa21aff90847dab91f9a332f6ea866e828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145375975426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717419145231319917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab
680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883,PodSandboxId:02cc9c7643ae1d60c77512d0aeddf63e8f21db58a64f13cd32f2de0e6f5846de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717419145208721134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc55611
3859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457,PodSandboxId:7f89ba975704b62f04831d12706488d1e68f340ca348752766da5bcf3979e5cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717419145145376217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes
.container.hash: d01e1cae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717419145058310427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kuber
netes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717418649829835451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kuberne
tes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831123170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831526984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717418501295990343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1717418481055378274,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1717418480896883638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33ec6cdd-7f0f-42fd-abdd-ee8b43514128 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.228276470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f50aa62f-bbf2-4144-8686-39cc6e95eca1 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.228352850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f50aa62f-bbf2-4144-8686-39cc6e95eca1 name=/runtime.v1.RuntimeService/Version
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.229471950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25593b28-8a6a-427c-9364-41e7a4b37c06 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.229898162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717419456229875416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25593b28-8a6a-427c-9364-41e7a4b37c06 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.230405441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8b90e48-305a-4c14-add9-58b9913d9e36 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.230460623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8b90e48-305a-4c14-add9-58b9913d9e36 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 12:57:36 ha-220492 crio[3827]: time="2024-06-03 12:57:36.230864047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bdbad040139c6bcb4ecf79851d910f1956ae7d70ddb13a89f98e6f364d182cc,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717419225277983339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717419203265462929,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717419191260651439,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:814ee2909ea750a0b5ad76b4b2083b3c8847b7a38ab14da4d756db2e147a7bf5,PodSandboxId:6d8da102188476dd055ffac241996f145d4a29e4e78cb13e080e8a4dee1d8ad5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717419178550858553,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kubernetes.container.hash: be6db159,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717419176588367856,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d,PodSandboxId:6fc8d1bbe01374efe9d05407c11d59e8f13779c8d401a0f6641cdb919f0d6a31,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717419175255850915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85b2808-26fa-4608-a208-2c11eaddc293,},Annotations:map[string]string{io.kubernetes.container.hash: 90de17f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdf7fec087647f8578e25827a877d7e975efb0c1d754cbd1eeee97e5ce09fa80,PodSandboxId:92b8d227c6d1ed130f22356a680e6f7b9c919e83912a9ae9508debd5183f2caf,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717419157557932569,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61fd699b695dc1beae4468f8fa8e57e9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359,PodSandboxId:498fb53617c69c314b9a3f9014055e57574506035ab66bb13a4f46960f0c2223,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717419146600723623,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dffbb
4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a,PodSandboxId:486909c094af46ad1d93db33bc54ae123e60d8222931ab1e608c83e878ba5fa4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717419145401912650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hbl6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f697f13-4a60-4247-bb5e-a8bcdd3336cd,},Annotations:map[string]string{io.kubernetes.container.hash: be10c8dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c80e06763a2b7bb46c689ad6c8fef0f893f1765af29
1317b666a68ab2bbbc8ec,PodSandboxId:e2131cfde7d9492dab997655ed6ea3d6bc4fe67b5ba4becca7259f73e3c5aa5e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145422308062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618,PodSandboxId:7b970b2197689db12520ccfc228623fa21aff90847dab91f9a332f6ea866e828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717419145375975426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd,PodSandboxId:878e194eb4c5d65a6da7fc151f4dca4ffc321bd5b9e8c2df7be3e1bcffd1d07c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717419145231319917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4620ab
680afec05b26612f993071a866,},Annotations:map[string]string{io.kubernetes.container.hash: 4b374434,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883,PodSandboxId:02cc9c7643ae1d60c77512d0aeddf63e8f21db58a64f13cd32f2de0e6f5846de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717419145208721134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc55611
3859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457,PodSandboxId:7f89ba975704b62f04831d12706488d1e68f340ca348752766da5bcf3979e5cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717419145145376217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes
.container.hash: d01e1cae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5,PodSandboxId:453f86c770842027c9ff0c29f1ccebad7218dbfd77885056a022ac8cb4bc43c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717419145058310427,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39cb6903f3b5aa40ef8bd7e72aabe415,},Annotations:map[string]string{io.kuber
netes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c9e115804f2ef0da9fc274ec7485309eb70b62384b05254dfa5ac8c6728e13,PodSandboxId:c73634cd0ed838aca7c928c176507e2dfa568ab421aa9e03f90ddc76ca3f89e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717418649829835451,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5z6j2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 776fef6b-c7d6-4793-a168-5102737dd302,},Annotations:map[string]string{io.kuberne
tes.container.hash: be6db159,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934,PodSandboxId:1b5bd65416e85f6689ee15ecb3ab55e907fbd1077ac5e5439bec68eb110a6f2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831123170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q7687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e78d9e6-8feb-44ef-b44d-ed6039ab00ee,},Annotations:map[string]string{io.kubernetes.container.hash: 62ef9a49,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e,PodSandboxId:6d9c5f1a45b9ec2700f63795dc3d92103fc5b6472ac9f6d2a638e50fb379eb54,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717418505831526984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-d2tgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534e15ed-2e68-4275-8725-099d7240c25d,},Annotations:map[string]string{io.kubernetes.container.hash: 5fcebe2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5,PodSandboxId:4d41713a63ac581773f2729e379c68f79cb014627aff280bda59d0e8a7cf22e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717418501295990343,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w2hpg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a52e47-6a1e-4f9c-ba1b-feb3e362531a,},Annotations:map[string]string{io.kubernetes.container.hash: c1dc988b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156,PodSandboxId:ba8b6aec50011e9fe8c42d7e92a51e5a0907e4e61a030192574cfa79773380e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1717418481055378274,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4887038ca9eb66694db5e7bd6f010727,},Annotations:map[string]string{io.kubernetes.container.hash: d01e1cae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35,PodSandboxId:b96e7f287499d7304c1d1aa216ee6aea5b51e6bbe5bfda82d347772d73f33297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1717418480896883638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-220492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cb138fa0228e501515fc556113859c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8b90e48-305a-4c14-add9-58b9913d9e36 name=/runtime.v1.RuntimeService/ListContainers
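	
	The repeated Version / ImageFsInfo / ListContainers request-response pairs above are CRI-O's debug-level journal output. A minimal sketch, assuming the profile name ha-220492 and that CRI-O runs as the systemd unit "crio" inside the VM, of pulling the same journal directly:
	
	    out/minikube-linux-amd64 -p ha-220492 ssh "sudo journalctl -u crio --no-pager | tail -n 200"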
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3bdbad040139c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   6fc8d1bbe0137       storage-provisioner
	ed5b6aa1d959c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      4 minutes ago       Running             kindnet-cni               3                   486909c094af4       kindnet-hbl6v
	8b5b47bf0b5f6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      4 minutes ago       Running             kube-controller-manager   2                   453f86c770842       kube-controller-manager-ha-220492
	814ee2909ea75       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   6d8da10218847       busybox-fc5497c4f-5z6j2
	f3d2b258b246f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      4 minutes ago       Running             kube-apiserver            3                   878e194eb4c5d       kube-apiserver-ha-220492
	7f1ebe7c407f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   6fc8d1bbe0137       storage-provisioner
	fdf7fec087647       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   92b8d227c6d1e       kube-vip-ha-220492
	7c3064afc1c4a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                1                   498fb53617c69       kube-proxy-w2hpg
	c80e06763a2b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   e2131cfde7d94       coredns-7db6d8ff4d-q7687
	6f1dffbb4b704       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      5 minutes ago       Exited              kindnet-cni               2                   486909c094af4       kindnet-hbl6v
	f2dd659fd5934       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   7b970b2197689       coredns-7db6d8ff4d-d2tgp
	4e5273b3a26c8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      5 minutes ago       Exited              kube-apiserver            2                   878e194eb4c5d       kube-apiserver-ha-220492
	40a382da798af       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      5 minutes ago       Running             kube-scheduler            1                   02cc9c7643ae1       kube-scheduler-ha-220492
	2fcebee1743ba       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   7f89ba975704b       etcd-ha-220492
	07ce13ba943e5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      5 minutes ago       Exited              kube-controller-manager   1                   453f86c770842       kube-controller-manager-ha-220492
	76c9e115804f2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   c73634cd0ed83       busybox-fc5497c4f-5z6j2
	50f524d71cd1f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   6d9c5f1a45b9e       coredns-7db6d8ff4d-d2tgp
	7c67da4b30c5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   1b5bd65416e85       coredns-7db6d8ff4d-q7687
	16c93dcdad420       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      15 minutes ago      Exited              kube-proxy                0                   4d41713a63ac5       kube-proxy-w2hpg
	3f1c2bb32752f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   ba8b6aec50011       etcd-ha-220492
	86f8a60e53334       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      16 minutes ago      Exited              kube-scheduler            0                   b96e7f287499d       kube-scheduler-ha-220492
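	
	The listing above uses the same columns crictl prints for a full container listing. A minimal sketch, assuming crictl is available on the node's PATH inside the ha-220492 VM, of reproducing it:
	
	    out/minikube-linux-amd64 -p ha-220492 ssh "sudo crictl ps -a"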
	
	
	==> coredns [50f524d71cd1f2697116e7f21f2de4dce2f9e5561c46a64f6c24713c3a56514e] <==
	[INFO] 10.244.1.2:53322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114547s
	[INFO] 10.244.2.2:47547 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145066s
	[INFO] 10.244.2.2:43785 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094815s
	[INFO] 10.244.2.2:54501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001319495s
	[INFO] 10.244.2.2:55983 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086973s
	[INFO] 10.244.2.2:56195 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069334s
	[INFO] 10.244.0.4:42110 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064533s
	[INFO] 10.244.0.4:48697 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058629s
	[INFO] 10.244.1.2:42865 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168668s
	[INFO] 10.244.1.2:56794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111494s
	[INFO] 10.244.1.2:58581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084125s
	[INFO] 10.244.1.2:50954 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099179s
	[INFO] 10.244.2.2:42915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142235s
	[INFO] 10.244.2.2:49410 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102812s
	[INFO] 10.244.0.4:51178 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00019093s
	[INFO] 10.244.1.2:40502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168017s
	[INFO] 10.244.1.2:35921 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000180824s
	[INFO] 10.244.1.2:40369 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155572s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1812&timeout=5m43s&timeoutSeconds=343&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7c67da4b30c5f444556405b8a25da6fbb0b38f383d298669f9f21785ed464934] <==
	[INFO] 10.244.1.2:36984 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001651953s
	[INFO] 10.244.1.2:59707 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013257s
	[INFO] 10.244.1.2:43132 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001294041s
	[INFO] 10.244.1.2:50044 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148444s
	[INFO] 10.244.1.2:46108 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000262338s
	[INFO] 10.244.2.2:59857 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001619455s
	[INFO] 10.244.2.2:37703 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098955s
	[INFO] 10.244.2.2:51044 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000180769s
	[INFO] 10.244.0.4:56245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077571s
	[INFO] 10.244.0.4:40429 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005283s
	[INFO] 10.244.2.2:55900 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100265s
	[INFO] 10.244.2.2:57003 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000955s
	[INFO] 10.244.0.4:39653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107486s
	[INFO] 10.244.0.4:50505 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152153s
	[INFO] 10.244.0.4:40598 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156098s
	[INFO] 10.244.1.2:37651 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154868s
	[INFO] 10.244.2.2:47903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111761s
	[INFO] 10.244.2.2:55067 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000076585s
	[INFO] 10.244.2.2:39348 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123715s
	[INFO] 10.244.2.2:33705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109704s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c80e06763a2b7bb46c689ad6c8fef0f893f1765af291317b666a68ab2bbbc8ec] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[390088725]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 12:52:30.478) (total time: 10001ms):
	Trace[390088725]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:52:40.479)
	Trace[390088725]: [10.001230462s] [10.001230462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43708->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43708->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35810->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35810->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35796->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:35796->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f2dd659fd5934802b2bfca420abad6aaf55ea7fbb840dce1459f7090b32a7618] <==
	[INFO] plugin/kubernetes: Trace[747306496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Jun-2024 12:52:30.135) (total time: 10001ms):
	Trace[747306496]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:52:40.136)
	Trace[747306496]: [10.001195314s] [10.001195314s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55372->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55372->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50886->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50886->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50876->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50876->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-220492
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T12_41_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:41:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:57:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:53:06 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:53:06 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:53:06 +0000   Mon, 03 Jun 2024 12:41:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:53:06 +0000   Mon, 03 Jun 2024 12:41:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-220492
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bebf6ef8229e4a0498f737d165a96550
	  System UUID:                bebf6ef8-229e-4a04-98f7-37d165a96550
	  Boot ID:                    38c7d220-f8e0-4890-a7e1-09c3bc826d0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5z6j2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-d2tgp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-q7687             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-220492                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-hbl6v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-220492             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-220492    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-w2hpg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-220492             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-220492                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 15m    kube-proxy       
	  Normal   Starting                 4m28s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-220492 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-220492 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-220492 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m    node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal   NodeReady                15m    kubelet          Node ha-220492 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Warning  ContainerGCFailed        6m9s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m20s  node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal   RegisteredNode           4m13s  node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	  Normal   RegisteredNode           3m9s   node-controller  Node ha-220492 event: Registered Node ha-220492 in Controller
	
	
	Name:               ha-220492-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_42_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:42:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:57:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 12:53:50 +0000   Mon, 03 Jun 2024 12:53:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 12:53:50 +0000   Mon, 03 Jun 2024 12:53:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 12:53:50 +0000   Mon, 03 Jun 2024 12:53:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 12:53:50 +0000   Mon, 03 Jun 2024 12:53:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    ha-220492-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1658a5c6e8394d57a265332808e714ab
	  System UUID:                1658a5c6-e839-4d57-a265-332808e714ab
	  Boot ID:                    4a625f5b-1388-411d-8640-464976133bbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m229v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-220492-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-5p8f7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-220492-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-220492-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-dkzgt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-220492-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-220492-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                    node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-220492-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-220492-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-220492-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-220492-m02 status is now: NodeNotReady
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-220492-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-220492-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-220492-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-220492-m02 event: Registered Node ha-220492-m02 in Controller
	
	
	Name:               ha-220492-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-220492-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=ha-220492
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T12_44_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 12:44:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-220492-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 12:55:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 12:54:49 +0000   Mon, 03 Jun 2024 12:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 12:54:49 +0000   Mon, 03 Jun 2024 12:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 12:54:49 +0000   Mon, 03 Jun 2024 12:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 12:54:49 +0000   Mon, 03 Jun 2024 12:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-220492-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6e57d6c6ec64017a56a85c3aa55fe71
	  System UUID:                c6e57d6c-6ec6-4017-a56a-85c3aa55fe71
	  Boot ID:                    68c487f3-a668-48f5-a7eb-6eae7251e1af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pbk4b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-l7rsb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-ggdgz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-220492-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-220492-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-220492-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-220492-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   RegisteredNode           4m13s                  node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   RegisteredNode           3m9s                   node-controller  Node ha-220492-m04 event: Registered Node ha-220492-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-220492-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-220492-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-220492-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-220492-m04 has been rebooted, boot id: 68c487f3-a668-48f5-a7eb-6eae7251e1af
	  Normal   NodeReady                2m47s                  kubelet          Node ha-220492-m04 status is now: NodeReady
	  Normal   NodeNotReady             104s (x2 over 3m39s)   node-controller  Node ha-220492-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 12:41] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.058474] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056951] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.164438] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.150241] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.258150] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.221845] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.557727] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.059101] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.202766] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.082984] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.083493] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.330072] kauditd_printk_skb: 68 callbacks suppressed
	[Jun 3 12:52] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.148118] systemd-fstab-generator[3755]: Ignoring "noauto" option for root device
	[  +0.173822] systemd-fstab-generator[3769]: Ignoring "noauto" option for root device
	[  +0.153196] systemd-fstab-generator[3782]: Ignoring "noauto" option for root device
	[  +0.274299] systemd-fstab-generator[3811]: Ignoring "noauto" option for root device
	[  +9.877419] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +0.089964] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.851315] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.277749] kauditd_printk_skb: 98 callbacks suppressed
	[Jun 3 12:53] kauditd_printk_skb: 12 callbacks suppressed
	[ +26.750467] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [2fcebee1743ba61371dce9de34e1b9c613c40f2a1ecca8bfc2308d417f6df457] <==
	{"level":"info","ts":"2024-06-03T12:54:09.229439Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:54:09.263088Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6f26d2d338759d80","to":"1c2a56b0ad40f85f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-03T12:54:09.263194Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:54:09.272781Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6f26d2d338759d80","to":"1c2a56b0ad40f85f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-03T12:54:09.272837Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"warn","ts":"2024-06-03T12:54:52.431894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.849761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-03T12:54:52.432979Z","caller":"traceutil/trace.go:171","msg":"trace[1052840892] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:2481; }","duration":"104.946524ms","start":"2024-06-03T12:54:52.327925Z","end":"2024-06-03T12:54:52.432872Z","steps":["trace[1052840892] 'count revisions from in-memory index tree'  (duration: 102.493341ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T12:55:02.689741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 switched to configuration voters=(2172392168769582172 8009320791952170368)"}
	{"level":"info","ts":"2024-06-03T12:55:02.691947Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","removed-remote-peer-id":"1c2a56b0ad40f85f","removed-remote-peer-urls":["https://192.168.39.169:2380"]}
	{"level":"info","ts":"2024-06-03T12:55:02.692119Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"warn","ts":"2024-06-03T12:55:02.692516Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:55:02.692593Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"warn","ts":"2024-06-03T12:55:02.693148Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:55:02.693452Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:55:02.693589Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"warn","ts":"2024-06-03T12:55:02.694388Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f","error":"context canceled"}
	{"level":"warn","ts":"2024-06-03T12:55:02.694466Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"1c2a56b0ad40f85f","error":"failed to read 1c2a56b0ad40f85f on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-06-03T12:55:02.694528Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"warn","ts":"2024-06-03T12:55:02.694694Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f","error":"context canceled"}
	{"level":"info","ts":"2024-06-03T12:55:02.694743Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:55:02.694784Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:55:02.694949Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6f26d2d338759d80","removed-remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:55:02.695184Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"6f26d2d338759d80","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"1c2a56b0ad40f85f"}
	{"level":"warn","ts":"2024-06-03T12:55:02.708867Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6f26d2d338759d80","remote-peer-id-stream-handler":"6f26d2d338759d80","remote-peer-id-from":"1c2a56b0ad40f85f"}
	{"level":"warn","ts":"2024-06-03T12:55:02.715433Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.169:52052","server-name":"","error":"EOF"}
	
	
	==> etcd [3f1c2bb32752f666af65f18178d8dd09b063abaa5dd50c071c9f8f377fc63156] <==
	2024/06/03 12:50:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T12:50:36.759439Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"543.117856ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-06-03T12:50:36.759469Z","caller":"traceutil/trace.go:171","msg":"trace[395994199] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"543.169823ms","start":"2024-06-03T12:50:36.216294Z","end":"2024-06-03T12:50:36.759464Z","steps":["trace[395994199] 'agreement among raft nodes before linearized reading'  (duration: 543.139448ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T12:50:36.7595Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:50:36.21628Z","time spent":"543.215521ms","remote":"127.0.0.1:34736","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	2024/06/03 12:50:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-03T12:50:36.806566Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T12:50:36.8066Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T12:50:36.806648Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6f26d2d338759d80","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-03T12:50:36.806808Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.806856Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.806903Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.80697Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.807109Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.807208Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.807241Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1e25e32aec59f45c"}
	{"level":"info","ts":"2024-06-03T12:50:36.807264Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807291Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807344Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807452Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807517Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807563Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.807606Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1c2a56b0ad40f85f"}
	{"level":"info","ts":"2024-06-03T12:50:36.811442Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-06-03T12:50:36.811619Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-06-03T12:50:36.811686Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-220492","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	
	
	==> kernel <==
	 12:57:36 up 16 min,  0 users,  load average: 0.12, 0.34, 0.27
	Linux ha-220492 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6f1dffbb4b704b536b1bcb6e68533fef327f63a3d78ca90949c0e8033c83dd2a] <==
	I0603 12:52:25.898561       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0603 12:52:25.898647       1 main.go:107] hostIP = 192.168.39.6
	podIP = 192.168.39.6
	I0603 12:52:25.898813       1 main.go:116] setting mtu 1500 for CNI 
	I0603 12:52:25.898858       1 main.go:146] kindnetd IP family: "ipv4"
	I0603 12:52:25.898881       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0603 12:52:36.160589       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0603 12:52:38.289487       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 12:52:47.505418       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.41:59974->10.96.0.1:443: read: connection reset by peer
	I0603 12:52:50.577500       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0603 12:52:53.649834       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [ed5b6aa1d959c00513e5e99b6b1c366a721b56bf4296e42444533d15a3d5ea89] <==
	I0603 12:56:54.317825       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:57:04.332990       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:57:04.333088       1 main.go:227] handling current node
	I0603 12:57:04.333100       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:57:04.333105       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:57:04.333211       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:57:04.333235       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:57:14.343549       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:57:14.343596       1 main.go:227] handling current node
	I0603 12:57:14.343606       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:57:14.343615       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:57:14.343728       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:57:14.343778       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:57:24.349937       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:57:24.349981       1 main.go:227] handling current node
	I0603 12:57:24.349991       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:57:24.349996       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:57:24.350159       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:57:24.350186       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	I0603 12:57:34.364305       1 main.go:223] Handling node with IPs: map[192.168.39.6:{}]
	I0603 12:57:34.364345       1 main.go:227] handling current node
	I0603 12:57:34.364355       1 main.go:223] Handling node with IPs: map[192.168.39.106:{}]
	I0603 12:57:34.364360       1 main.go:250] Node ha-220492-m02 has CIDR [10.244.1.0/24] 
	I0603 12:57:34.364479       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0603 12:57:34.364501       1 main.go:250] Node ha-220492-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4e5273b3a26c85e72a54ec34a7be83af05bfc0f3643495817233c3de238c2cdd] <==
	I0603 12:52:25.951283       1 options.go:221] external host was not specified, using 192.168.39.6
	I0603 12:52:25.965686       1 server.go:148] Version: v1.30.1
	I0603 12:52:25.965753       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:52:26.415884       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0603 12:52:26.429198       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0603 12:52:26.429237       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0603 12:52:26.429456       1 instance.go:299] Using reconciler: lease
	I0603 12:52:26.441180       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0603 12:52:46.412698       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0603 12:52:46.413940       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0603 12:52:46.430746       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f3d2b258b246f4a87838c7b594819a6cc46d5a5410924b49d29d57a56ad652be] <==
	I0603 12:53:03.794971       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 12:53:03.890224       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 12:53:03.895470       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 12:53:03.901812       1 aggregator.go:165] initial CRD sync complete...
	I0603 12:53:03.901928       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 12:53:03.902042       1 cache.go:32] Waiting for caches to sync for autoregister controller
	W0603 12:53:03.943326       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.169]
	I0603 12:53:03.957312       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 12:53:03.957351       1 policy_source.go:224] refreshing policies
	I0603 12:53:03.957484       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 12:53:03.988532       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 12:53:03.988554       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 12:53:03.988709       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 12:53:03.989169       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 12:53:03.989966       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 12:53:03.994394       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 12:53:04.003956       1 cache.go:39] Caches are synced for autoregister controller
	I0603 12:53:04.018339       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 12:53:04.045360       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 12:53:04.057435       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0603 12:53:04.065487       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0603 12:53:04.793710       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0603 12:53:05.085572       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.169 192.168.39.6]
	W0603 12:53:25.087312       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.106 192.168.39.6]
	W0603 12:55:15.096750       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.106 192.168.39.6]
	
	
	==> kube-controller-manager [07ce13ba943e5cd4286981b654cc48caf310b4baef07c15405714e750e44b1b5] <==
	I0603 12:52:26.666855       1 serving.go:380] Generated self-signed cert in-memory
	I0603 12:52:27.301798       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0603 12:52:27.301836       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:52:27.303561       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0603 12:52:27.303662       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0603 12:52:27.304162       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0603 12:52:27.304238       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0603 12:52:47.436904       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.6:8443/healthz\": dial tcp 192.168.39.6:8443: connect: connection refused"
	
	
	==> kube-controller-manager [8b5b47bf0b5f628ddb3f76561fa28630a9d7dedc2bb4c83f094255ae244dca8e] <==
	I0603 12:54:59.428561       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.919639ms"
	I0603 12:54:59.493370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.599464ms"
	E0603 12:54:59.493433       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0603 12:54:59.543823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.295095ms"
	I0603 12:54:59.564204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.094007ms"
	I0603 12:54:59.564507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.117µs"
	I0603 12:55:01.355463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.013507ms"
	I0603 12:55:01.356594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="131.588µs"
	I0603 12:55:01.486683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.885µs"
	I0603 12:55:01.914692       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.201µs"
	I0603 12:55:01.931631       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.724µs"
	I0603 12:55:01.935386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.457µs"
	I0603 12:55:14.203471       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-220492-m04"
	E0603 12:55:23.406530       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	E0603 12:55:23.406583       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	E0603 12:55:23.406591       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	E0603 12:55:23.406596       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	E0603 12:55:23.406601       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	E0603 12:55:43.407582       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	E0603 12:55:43.407682       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	E0603 12:55:43.407695       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	E0603 12:55:43.407715       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	E0603 12:55:43.407724       1 gc_controller.go:153] "Failed to get node" err="node \"ha-220492-m03\" not found" logger="pod-garbage-collector-controller" node="ha-220492-m03"
	I0603 12:55:52.643128       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.525657ms"
	I0603 12:55:52.643239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.182µs"
	
	
	==> kube-proxy [16c93dcdad420f0831a36fd31ab05cb7c3a9fefd9706a928d0b31b781e1cbcb5] <==
	E0603 12:49:23.923124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:26.995959       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:26.996054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:26.996147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:26.996175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:26.996321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:26.996446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:33.137981       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:33.138419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:33.138690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:33.138749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:33.138703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:33.138822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:42.353881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:42.354072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:45.426264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:45.427368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:49:45.428082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:49:45.428286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:50:00.786224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:50:00.786818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:50:00.786942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-220492&resourceVersion=1784": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:50:00.786944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0603 12:50:06.931564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	E0603 12:50:06.931798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [7c3064afc1c4a526855ad3df05b9a3c438da7b156ff6f30af2f40e926433a359] <==
	I0603 12:52:27.366129       1 server_linux.go:69] "Using iptables proxy"
	E0603 12:52:28.241889       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 12:52:31.313405       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 12:52:34.386558       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 12:52:40.530512       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0603 12:52:49.745597       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-220492\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0603 12:53:08.621848       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	I0603 12:53:08.668531       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 12:53:08.668614       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 12:53:08.668645       1 server_linux.go:165] "Using iptables Proxier"
	I0603 12:53:08.671211       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 12:53:08.671584       1 server.go:872] "Version info" version="v1.30.1"
	I0603 12:53:08.671832       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 12:53:08.673086       1 config.go:192] "Starting service config controller"
	I0603 12:53:08.673126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 12:53:08.673154       1 config.go:101] "Starting endpoint slice config controller"
	I0603 12:53:08.673157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 12:53:08.674942       1 config.go:319] "Starting node config controller"
	I0603 12:53:08.674970       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 12:53:08.773528       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 12:53:08.773598       1 shared_informer.go:320] Caches are synced for service config
	I0603 12:53:08.775097       1 shared_informer.go:320] Caches are synced for node config
	W0603 12:56:00.112170       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0603 12:56:00.112294       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0603 12:56:00.112367       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [40a382da798af9922fb6ae55c5c75fdaaf42decaca661a820f6e168e79fce883] <==
	W0603 12:52:56.187447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.6:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0603 12:52:56.187520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.6:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	W0603 12:52:56.499660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.6:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0603 12:52:56.499731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.6:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	W0603 12:52:56.529603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0603 12:52:56.529649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	W0603 12:52:56.674386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.6:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0603 12:52:56.674422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.6:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	W0603 12:53:03.837578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 12:53:03.837647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 12:53:03.837741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:53:03.837782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:53:03.837847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:53:03.837886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:53:03.837960       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:53:03.841133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0603 12:53:03.841198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0603 12:53:03.841301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:53:03.841336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 12:53:03.841415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:53:03.841506       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:53:03.842061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 12:53:03.898479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:53:03.898703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0603 12:53:04.442741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [86f8a60e5333435d8ac7bc454e10cecb904b633e2ae00b080728114f5f1b1f35] <==
	W0603 12:50:32.950834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 12:50:32.950940       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 12:50:32.961091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 12:50:32.961173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 12:50:32.989986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 12:50:32.990107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 12:50:33.240103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 12:50:33.240160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 12:50:33.521542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 12:50:33.521594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 12:50:33.630980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0603 12:50:33.631296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0603 12:50:34.209964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 12:50:34.210127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 12:50:34.441540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 12:50:34.441607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 12:50:34.859116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 12:50:34.859206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 12:50:34.899382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 12:50:34.899642       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 12:50:35.060977       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 12:50:35.061134       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 12:50:35.416346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:50:35.416459       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 12:50:36.726806       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 12:53:31 ha-220492 kubelet[1372]: E0603 12:53:31.247187    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f85b2808-26fa-4608-a208-2c11eaddc293)\"" pod="kube-system/storage-provisioner" podUID="f85b2808-26fa-4608-a208-2c11eaddc293"
	Jun 03 12:53:42 ha-220492 kubelet[1372]: I0603 12:53:42.399703    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-5z6j2" podStartSLOduration=573.728936842 podStartE2EDuration="9m34.399667987s" podCreationTimestamp="2024-06-03 12:44:08 +0000 UTC" firstStartedPulling="2024-06-03 12:44:09.141426786 +0000 UTC m=+162.042844489" lastFinishedPulling="2024-06-03 12:44:09.812157928 +0000 UTC m=+162.713575634" observedRunningTime="2024-06-03 12:44:10.012742024 +0000 UTC m=+162.914159746" watchObservedRunningTime="2024-06-03 12:53:42.399667987 +0000 UTC m=+735.301085711"
	Jun 03 12:53:45 ha-220492 kubelet[1372]: I0603 12:53:45.247711    1372 scope.go:117] "RemoveContainer" containerID="7f1ebe7c407f4bdc7d5296580d428b5ce113f202ffbd23a4e808f0b6bc6b3f3d"
	Jun 03 12:53:55 ha-220492 kubelet[1372]: I0603 12:53:55.248255    1372 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-220492" podUID="577ecb1f-e5df-4494-b898-7d2d8e79151d"
	Jun 03 12:53:55 ha-220492 kubelet[1372]: I0603 12:53:55.269355    1372 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-220492"
	Jun 03 12:54:27 ha-220492 kubelet[1372]: E0603 12:54:27.281634    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:54:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:54:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:54:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:54:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:55:27 ha-220492 kubelet[1372]: E0603 12:55:27.281075    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:55:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:55:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:55:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:55:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:56:27 ha-220492 kubelet[1372]: E0603 12:56:27.280266    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:56:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:56:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:56:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:56:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 12:57:27 ha-220492 kubelet[1372]: E0603 12:57:27.291149    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 12:57:27 ha-220492 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 12:57:27 ha-220492 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 12:57:27 ha-220492 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 12:57:27 ha-220492 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 12:57:35.794903 1105007 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19011-1078924/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
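The `failed to output last start logs ... bufio.Scanner: token too long` error in the stderr block above is Go's bufio.Scanner refusing a single line longer than its default 64 KiB token limit. The sketch below is not minikube's code; it only reproduces that failure mode against a stand-in file name and shows how a caller can raise the limit with Scanner.Buffer:

    package main

    import (
        "bufio"
        "errors"
        "fmt"
        "os"
    )

    func main() {
        // "lastStart.txt" is a stand-in for the real path under .minikube/logs.
        f, err := os.Open("lastStart.txt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // The default cap is bufio.MaxScanTokenSize (64 KiB); one oversized log
        // line makes Scan stop and Err return bufio.ErrTooLong ("token too long").
        // Enlarging the buffer avoids that:
        sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)

        for sc.Scan() {
            _ = sc.Text() // process one line of the log
        }
        if err := sc.Err(); err != nil {
            if errors.Is(err, bufio.ErrTooLong) {
                fmt.Fprintln(os.Stderr, "log line exceeds scanner buffer:", err)
                return
            }
            fmt.Fprintln(os.Stderr, err)
        }
    }

With the larger second argument to Buffer, the oversized line is read instead of aborting the scan.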
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-220492 -n ha-220492
helpers_test.go:261: (dbg) Run:  kubectl --context ha-220492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.72s)
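Separately, the first kindnet container log in the stdout above ends in `panic: Reached maximum retries obtaining node list` after a run of "no route to host" errors. The following is only a generic sketch of that bounded retry-then-panic shape; getNodes is a hypothetical stand-in, not kindnetd's actual client code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // getNodes is a hypothetical stand-in for the node-list call that kept
    // failing with "no route to host" in the log above.
    func getNodes() error {
        return errors.New("dial tcp 10.96.0.1:443: connect: no route to host")
    }

    func main() {
        const maxRetries = 5
        var err error
        for i := 0; i < maxRetries; i++ {
            if err = getNodes(); err == nil {
                break
            }
            fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
            time.Sleep(3 * time.Second)
        }
        if err != nil {
            // Exhausting the retry budget ends the process; the container log
            // records this as the panic seen above.
            panic(fmt.Sprintf("Reached maximum retries obtaining node list: %v", err))
        }
    }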

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (299.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-101468
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-101468
E0603 13:12:45.542225 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-101468: exit status 82 (2m2.687768527s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-101468-m03"  ...
	* Stopping node "multinode-101468-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-101468" : exit status 82
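Exit status 82 here corresponds to the GUEST_STOP_TIMEOUT in the stderr above: the stop command gave up while the VM still reported state "Running". A minimal sketch of that poll-until-deadline shape, with requestStop and vmState as hypothetical stand-ins rather than minikube's real machine API:

    package main

    import (
        "fmt"
        "time"
    )

    // requestStop and vmState are hypothetical stand-ins for the libvirt/driver
    // calls a tool like minikube would make; they are not its real API.
    func requestStop(name string) error { fmt.Println("* Stopping node", name, "..."); return nil }
    func vmState(name string) string    { return "Running" } // simulate a VM that never shuts down

    func main() {
        const name = "multinode-101468-m03"
        if err := requestStop(name); err != nil {
            fmt.Println("stop request failed:", err)
            return
        }
        // The failed run above spent roughly two minutes before giving up;
        // a short deadline keeps this sketch quick to run.
        deadline := time.Now().Add(10 * time.Second)
        for time.Now().Before(deadline) {
            if vmState(name) != "Running" {
                fmt.Println("stopped:", name)
                return
            }
            time.Sleep(2 * time.Second)
        }
        // Same failure shape the test reports as GUEST_STOP_TIMEOUT.
        fmt.Printf("unable to stop vm, current state %q\n", vmState(name))
    }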
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-101468 --wait=true -v=8 --alsologtostderr
E0603 13:14:58.228261 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-101468 --wait=true -v=8 --alsologtostderr: (2m54.386223786s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-101468
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-101468 -n multinode-101468
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-101468 logs -n 25: (1.549709053s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m02:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2236251675/001/cp-test_multinode-101468-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m02:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468:/home/docker/cp-test_multinode-101468-m02_multinode-101468.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n multinode-101468 sudo cat                                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /home/docker/cp-test_multinode-101468-m02_multinode-101468.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m02:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03:/home/docker/cp-test_multinode-101468-m02_multinode-101468-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n multinode-101468-m03 sudo cat                                   | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /home/docker/cp-test_multinode-101468-m02_multinode-101468-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp testdata/cp-test.txt                                                | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m03:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2236251675/001/cp-test_multinode-101468-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m03:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468:/home/docker/cp-test_multinode-101468-m03_multinode-101468.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n multinode-101468 sudo cat                                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /home/docker/cp-test_multinode-101468-m03_multinode-101468.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m03:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m02:/home/docker/cp-test_multinode-101468-m03_multinode-101468-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n multinode-101468-m02 sudo cat                                   | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /home/docker/cp-test_multinode-101468-m03_multinode-101468-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-101468 node stop m03                                                          | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	| node    | multinode-101468 node start                                                             | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-101468                                                                | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC |                     |
	| stop    | -p multinode-101468                                                                     | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC |                     |
	| start   | -p multinode-101468                                                                     | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:13 UTC | 03 Jun 24 13:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-101468                                                                | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:13:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:13:49.241626 1114112 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:13:49.242056 1114112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:13:49.242073 1114112 out.go:304] Setting ErrFile to fd 2...
	I0603 13:13:49.242080 1114112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:13:49.242724 1114112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:13:49.243501 1114112 out.go:298] Setting JSON to false
	I0603 13:13:49.244527 1114112 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":14176,"bootTime":1717406253,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:13:49.244591 1114112 start.go:139] virtualization: kvm guest
	I0603 13:13:49.247675 1114112 out.go:177] * [multinode-101468] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:13:49.249373 1114112 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:13:49.250850 1114112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:13:49.249393 1114112 notify.go:220] Checking for updates...
	I0603 13:13:49.253725 1114112 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:13:49.254931 1114112 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:13:49.256121 1114112 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:13:49.257381 1114112 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:13:49.259123 1114112 config.go:182] Loaded profile config "multinode-101468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:13:49.259251 1114112 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:13:49.259665 1114112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:13:49.259724 1114112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:13:49.278236 1114112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0603 13:13:49.278667 1114112 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:13:49.279210 1114112 main.go:141] libmachine: Using API Version  1
	I0603 13:13:49.279245 1114112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:13:49.279600 1114112 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:13:49.279808 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:13:49.315322 1114112 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:13:49.316706 1114112 start.go:297] selected driver: kvm2
	I0603 13:13:49.316728 1114112 start.go:901] validating driver "kvm2" against &{Name:multinode-101468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:13:49.316871 1114112 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:13:49.317237 1114112 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:13:49.317323 1114112 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:13:49.332599 1114112 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:13:49.333299 1114112 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:13:49.333376 1114112 cni.go:84] Creating CNI manager for ""
	I0603 13:13:49.333393 1114112 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 13:13:49.333514 1114112 start.go:340] cluster config:
	{Name:multinode-101468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:13:49.333658 1114112 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:13:49.335654 1114112 out.go:177] * Starting "multinode-101468" primary control-plane node in "multinode-101468" cluster
	I0603 13:13:49.337011 1114112 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:13:49.337050 1114112 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 13:13:49.337068 1114112 cache.go:56] Caching tarball of preloaded images
	I0603 13:13:49.337194 1114112 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:13:49.337210 1114112 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 13:13:49.337363 1114112 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/config.json ...
	I0603 13:13:49.337626 1114112 start.go:360] acquireMachinesLock for multinode-101468: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:13:49.337674 1114112 start.go:364] duration metric: took 25.744µs to acquireMachinesLock for "multinode-101468"
	I0603 13:13:49.337695 1114112 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:13:49.337704 1114112 fix.go:54] fixHost starting: 
	I0603 13:13:49.338011 1114112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:13:49.338046 1114112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:13:49.352642 1114112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0603 13:13:49.353099 1114112 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:13:49.353695 1114112 main.go:141] libmachine: Using API Version  1
	I0603 13:13:49.353725 1114112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:13:49.354070 1114112 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:13:49.354303 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:13:49.354474 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetState
	I0603 13:13:49.356129 1114112 fix.go:112] recreateIfNeeded on multinode-101468: state=Running err=<nil>
	W0603 13:13:49.356147 1114112 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:13:49.359152 1114112 out.go:177] * Updating the running kvm2 "multinode-101468" VM ...
	I0603 13:13:49.360573 1114112 machine.go:94] provisionDockerMachine start ...
	I0603 13:13:49.360602 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:13:49.360842 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.363589 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.363944 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.363980 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.364107 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:49.364307 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.364495 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.364653 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:49.364836 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:13:49.365068 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:13:49.365081 1114112 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:13:49.483591 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101468
	
	I0603 13:13:49.483619 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetMachineName
	I0603 13:13:49.483885 1114112 buildroot.go:166] provisioning hostname "multinode-101468"
	I0603 13:13:49.483915 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetMachineName
	I0603 13:13:49.484112 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.486811 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.487205 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.487256 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.487396 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:49.487577 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.487737 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.487898 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:49.488063 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:13:49.488277 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:13:49.488294 1114112 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101468 && echo "multinode-101468" | sudo tee /etc/hostname
	I0603 13:13:49.614930 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101468
	
	I0603 13:13:49.614976 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.617743 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.618071 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.618101 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.618262 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:49.618490 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.618714 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.618884 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:49.619067 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:13:49.619250 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:13:49.619267 1114112 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101468/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:13:49.726553 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:13:49.726582 1114112 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:13:49.726617 1114112 buildroot.go:174] setting up certificates
	I0603 13:13:49.726628 1114112 provision.go:84] configureAuth start
	I0603 13:13:49.726640 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetMachineName
	I0603 13:13:49.726957 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetIP
	I0603 13:13:49.729838 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.730194 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.730222 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.730408 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.732614 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.733007 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.733045 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.733197 1114112 provision.go:143] copyHostCerts
	I0603 13:13:49.733233 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:13:49.733285 1114112 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:13:49.733296 1114112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:13:49.733386 1114112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:13:49.733494 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:13:49.733517 1114112 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:13:49.733526 1114112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:13:49.733557 1114112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:13:49.733606 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:13:49.733630 1114112 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:13:49.733639 1114112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:13:49.733669 1114112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:13:49.733724 1114112 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.multinode-101468 san=[127.0.0.1 192.168.39.141 localhost minikube multinode-101468]
	I0603 13:13:49.853554 1114112 provision.go:177] copyRemoteCerts
	I0603 13:13:49.853617 1114112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:13:49.853642 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.856178 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.856509 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.856532 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.856667 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:49.856893 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.857024 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:49.857137 1114112 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468/id_rsa Username:docker}
	I0603 13:13:49.944861 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 13:13:49.944945 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:13:49.972068 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 13:13:49.972139 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:13:49.997167 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 13:13:49.997253 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 13:13:50.022768 1114112 provision.go:87] duration metric: took 296.124569ms to configureAuth
	I0603 13:13:50.022798 1114112 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:13:50.023030 1114112 config.go:182] Loaded profile config "multinode-101468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:13:50.023126 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:50.025510 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:50.025895 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:50.025932 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:50.026066 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:50.026301 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:50.026485 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:50.026639 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:50.026777 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:13:50.026962 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:13:50.026982 1114112 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:15:20.805943 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:15:20.805978 1114112 machine.go:97] duration metric: took 1m31.445384273s to provisionDockerMachine
	I0603 13:15:20.805999 1114112 start.go:293] postStartSetup for "multinode-101468" (driver="kvm2")
	I0603 13:15:20.806010 1114112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:15:20.806028 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:20.806413 1114112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:15:20.806461 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:15:20.809916 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:20.810338 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:20.810371 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:20.810491 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:15:20.810728 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:20.810961 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:15:20.811187 1114112 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468/id_rsa Username:docker}
	I0603 13:15:20.898141 1114112 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:15:20.903428 1114112 command_runner.go:130] > NAME=Buildroot
	I0603 13:15:20.903455 1114112 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 13:15:20.903461 1114112 command_runner.go:130] > ID=buildroot
	I0603 13:15:20.903469 1114112 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 13:15:20.903477 1114112 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 13:15:20.903519 1114112 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:15:20.903540 1114112 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:15:20.903610 1114112 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:15:20.903703 1114112 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:15:20.903726 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 13:15:20.903943 1114112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:15:20.914818 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:15:20.940954 1114112 start.go:296] duration metric: took 134.940351ms for postStartSetup
	I0603 13:15:20.941012 1114112 fix.go:56] duration metric: took 1m31.603308631s for fixHost
	I0603 13:15:20.941056 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:15:20.943765 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:20.944140 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:20.944175 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:20.944375 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:15:20.944655 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:20.944833 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:20.944978 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:15:20.945231 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:15:20.945451 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:15:20.945463 1114112 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:15:21.054327 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717420521.031295798
	
	I0603 13:15:21.054353 1114112 fix.go:216] guest clock: 1717420521.031295798
	I0603 13:15:21.054360 1114112 fix.go:229] Guest: 2024-06-03 13:15:21.031295798 +0000 UTC Remote: 2024-06-03 13:15:20.941029963 +0000 UTC m=+91.737593437 (delta=90.265835ms)
	I0603 13:15:21.054400 1114112 fix.go:200] guest clock delta is within tolerance: 90.265835ms
	I0603 13:15:21.054405 1114112 start.go:83] releasing machines lock for "multinode-101468", held for 1m31.716719224s
	I0603 13:15:21.054439 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:21.054759 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetIP
	I0603 13:15:21.057471 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.057773 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:21.057809 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.058012 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:21.058578 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:21.058775 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:21.058889 1114112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:15:21.058937 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:15:21.059050 1114112 ssh_runner.go:195] Run: cat /version.json
	I0603 13:15:21.059076 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:15:21.061615 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.061656 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.061966 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:21.061994 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.062043 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:21.062068 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.062156 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:15:21.062242 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:15:21.062320 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:21.062399 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:21.062453 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:15:21.062515 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:15:21.062580 1114112 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468/id_rsa Username:docker}
	I0603 13:15:21.062684 1114112 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468/id_rsa Username:docker}
	I0603 13:15:21.142402 1114112 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 13:15:21.166262 1114112 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 13:15:21.167118 1114112 ssh_runner.go:195] Run: systemctl --version
	I0603 13:15:21.173308 1114112 command_runner.go:130] > systemd 252 (252)
	I0603 13:15:21.173346 1114112 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 13:15:21.173597 1114112 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:15:21.334506 1114112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 13:15:21.340968 1114112 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 13:15:21.341174 1114112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:15:21.341258 1114112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:15:21.351729 1114112 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 13:15:21.351758 1114112 start.go:494] detecting cgroup driver to use...
	I0603 13:15:21.351827 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:15:21.370332 1114112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:15:21.384472 1114112 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:15:21.384536 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:15:21.399293 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:15:21.413821 1114112 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:15:21.557557 1114112 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:15:21.702349 1114112 docker.go:233] disabling docker service ...
	I0603 13:15:21.702450 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:15:21.721882 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:15:21.737507 1114112 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:15:21.881991 1114112 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:15:22.046126 1114112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:15:22.060366 1114112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:15:22.080090 1114112 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0603 13:15:22.080149 1114112 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:15:22.080205 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.091567 1114112 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:15:22.091671 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.102508 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.113794 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.124559 1114112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:15:22.135819 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.146808 1114112 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.158392 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.169246 1114112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:15:22.179027 1114112 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 13:15:22.179158 1114112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:15:22.189154 1114112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:15:22.357469 1114112 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:15:22.609111 1114112 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:15:22.609208 1114112 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:15:22.614503 1114112 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0603 13:15:22.614525 1114112 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 13:15:22.614532 1114112 command_runner.go:130] > Device: 0,22	Inode: 1338        Links: 1
	I0603 13:15:22.614539 1114112 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 13:15:22.614545 1114112 command_runner.go:130] > Access: 2024-06-03 13:15:22.475642304 +0000
	I0603 13:15:22.614551 1114112 command_runner.go:130] > Modify: 2024-06-03 13:15:22.475642304 +0000
	I0603 13:15:22.614555 1114112 command_runner.go:130] > Change: 2024-06-03 13:15:22.475642304 +0000
	I0603 13:15:22.614559 1114112 command_runner.go:130] >  Birth: -
	I0603 13:15:22.614577 1114112 start.go:562] Will wait 60s for crictl version
	I0603 13:15:22.614619 1114112 ssh_runner.go:195] Run: which crictl
	I0603 13:15:22.618402 1114112 command_runner.go:130] > /usr/bin/crictl
	I0603 13:15:22.618478 1114112 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:15:22.655001 1114112 command_runner.go:130] > Version:  0.1.0
	I0603 13:15:22.655035 1114112 command_runner.go:130] > RuntimeName:  cri-o
	I0603 13:15:22.655055 1114112 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0603 13:15:22.655063 1114112 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 13:15:22.655142 1114112 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:15:22.655261 1114112 ssh_runner.go:195] Run: crio --version
	I0603 13:15:22.688541 1114112 command_runner.go:130] > crio version 1.29.1
	I0603 13:15:22.688575 1114112 command_runner.go:130] > Version:        1.29.1
	I0603 13:15:22.688585 1114112 command_runner.go:130] > GitCommit:      unknown
	I0603 13:15:22.688592 1114112 command_runner.go:130] > GitCommitDate:  unknown
	I0603 13:15:22.688598 1114112 command_runner.go:130] > GitTreeState:   clean
	I0603 13:15:22.688607 1114112 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0603 13:15:22.688613 1114112 command_runner.go:130] > GoVersion:      go1.21.6
	I0603 13:15:22.688619 1114112 command_runner.go:130] > Compiler:       gc
	I0603 13:15:22.688625 1114112 command_runner.go:130] > Platform:       linux/amd64
	I0603 13:15:22.688632 1114112 command_runner.go:130] > Linkmode:       dynamic
	I0603 13:15:22.688639 1114112 command_runner.go:130] > BuildTags:      
	I0603 13:15:22.688649 1114112 command_runner.go:130] >   containers_image_ostree_stub
	I0603 13:15:22.688657 1114112 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0603 13:15:22.688666 1114112 command_runner.go:130] >   btrfs_noversion
	I0603 13:15:22.688676 1114112 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0603 13:15:22.688685 1114112 command_runner.go:130] >   libdm_no_deferred_remove
	I0603 13:15:22.688693 1114112 command_runner.go:130] >   seccomp
	I0603 13:15:22.688702 1114112 command_runner.go:130] > LDFlags:          unknown
	I0603 13:15:22.688743 1114112 command_runner.go:130] > SeccompEnabled:   true
	I0603 13:15:22.688763 1114112 command_runner.go:130] > AppArmorEnabled:  false
	I0603 13:15:22.688853 1114112 ssh_runner.go:195] Run: crio --version
	I0603 13:15:22.717784 1114112 command_runner.go:130] > crio version 1.29.1
	I0603 13:15:22.717814 1114112 command_runner.go:130] > Version:        1.29.1
	I0603 13:15:22.717823 1114112 command_runner.go:130] > GitCommit:      unknown
	I0603 13:15:22.717831 1114112 command_runner.go:130] > GitCommitDate:  unknown
	I0603 13:15:22.717837 1114112 command_runner.go:130] > GitTreeState:   clean
	I0603 13:15:22.717846 1114112 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0603 13:15:22.717878 1114112 command_runner.go:130] > GoVersion:      go1.21.6
	I0603 13:15:22.717887 1114112 command_runner.go:130] > Compiler:       gc
	I0603 13:15:22.717895 1114112 command_runner.go:130] > Platform:       linux/amd64
	I0603 13:15:22.717903 1114112 command_runner.go:130] > Linkmode:       dynamic
	I0603 13:15:22.717911 1114112 command_runner.go:130] > BuildTags:      
	I0603 13:15:22.717919 1114112 command_runner.go:130] >   containers_image_ostree_stub
	I0603 13:15:22.717928 1114112 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0603 13:15:22.717935 1114112 command_runner.go:130] >   btrfs_noversion
	I0603 13:15:22.717961 1114112 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0603 13:15:22.717968 1114112 command_runner.go:130] >   libdm_no_deferred_remove
	I0603 13:15:22.717974 1114112 command_runner.go:130] >   seccomp
	I0603 13:15:22.717982 1114112 command_runner.go:130] > LDFlags:          unknown
	I0603 13:15:22.717988 1114112 command_runner.go:130] > SeccompEnabled:   true
	I0603 13:15:22.717995 1114112 command_runner.go:130] > AppArmorEnabled:  false
	I0603 13:15:22.721480 1114112 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:15:22.723274 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetIP
	I0603 13:15:22.726508 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:22.726928 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:22.726959 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:22.727177 1114112 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 13:15:22.731630 1114112 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0603 13:15:22.731764 1114112 kubeadm.go:877] updating cluster {Name:multinode-101468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:15:22.731922 1114112 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:15:22.731966 1114112 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:15:22.777010 1114112 command_runner.go:130] > {
	I0603 13:15:22.777037 1114112 command_runner.go:130] >   "images": [
	I0603 13:15:22.777041 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777049 1114112 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0603 13:15:22.777054 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777060 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0603 13:15:22.777063 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777067 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777075 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0603 13:15:22.777083 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0603 13:15:22.777087 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777091 1114112 command_runner.go:130] >       "size": "65291810",
	I0603 13:15:22.777095 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777100 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777109 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777114 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777117 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777120 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777127 1114112 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0603 13:15:22.777138 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777144 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0603 13:15:22.777148 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777152 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777160 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0603 13:15:22.777169 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0603 13:15:22.777173 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777180 1114112 command_runner.go:130] >       "size": "65908273",
	I0603 13:15:22.777183 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777191 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777197 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777201 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777207 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777210 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777216 1114112 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0603 13:15:22.777220 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777231 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0603 13:15:22.777247 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777251 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777257 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0603 13:15:22.777264 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0603 13:15:22.777270 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777274 1114112 command_runner.go:130] >       "size": "1363676",
	I0603 13:15:22.777280 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777284 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777289 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777293 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777299 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777302 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777310 1114112 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0603 13:15:22.777317 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777322 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0603 13:15:22.777328 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777332 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777342 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0603 13:15:22.777357 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0603 13:15:22.777364 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777368 1114112 command_runner.go:130] >       "size": "31470524",
	I0603 13:15:22.777375 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777379 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777395 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777418 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777428 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777436 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777449 1114112 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0603 13:15:22.777458 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777466 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0603 13:15:22.777469 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777473 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777480 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0603 13:15:22.777492 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0603 13:15:22.777497 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777511 1114112 command_runner.go:130] >       "size": "61245718",
	I0603 13:15:22.777518 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777522 1114112 command_runner.go:130] >       "username": "nonroot",
	I0603 13:15:22.777526 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777537 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777543 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777547 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777555 1114112 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0603 13:15:22.777560 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777564 1114112 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0603 13:15:22.777570 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777575 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777584 1114112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0603 13:15:22.777592 1114112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0603 13:15:22.777598 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777603 1114112 command_runner.go:130] >       "size": "150779692",
	I0603 13:15:22.777609 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.777613 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.777616 1114112 command_runner.go:130] >       },
	I0603 13:15:22.777620 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777626 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777630 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777636 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777639 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777647 1114112 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0603 13:15:22.777651 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777656 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0603 13:15:22.777661 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777665 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777679 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0603 13:15:22.777694 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0603 13:15:22.777703 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777709 1114112 command_runner.go:130] >       "size": "117601759",
	I0603 13:15:22.777718 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.777727 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.777736 1114112 command_runner.go:130] >       },
	I0603 13:15:22.777751 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777758 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777762 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777767 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777771 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777779 1114112 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0603 13:15:22.777786 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777791 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0603 13:15:22.777797 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777801 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777829 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0603 13:15:22.777840 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0603 13:15:22.777843 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777846 1114112 command_runner.go:130] >       "size": "112170310",
	I0603 13:15:22.777850 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.777859 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.777867 1114112 command_runner.go:130] >       },
	I0603 13:15:22.777874 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777880 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777885 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777889 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777893 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777902 1114112 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0603 13:15:22.777907 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777914 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0603 13:15:22.777920 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777926 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777953 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0603 13:15:22.777964 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0603 13:15:22.777973 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777979 1114112 command_runner.go:130] >       "size": "85933465",
	I0603 13:15:22.777988 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777996 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.778006 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.778015 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.778021 1114112 command_runner.go:130] >     },
	I0603 13:15:22.778037 1114112 command_runner.go:130] >     {
	I0603 13:15:22.778051 1114112 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0603 13:15:22.778060 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.778068 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0603 13:15:22.778077 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.778084 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.778099 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0603 13:15:22.778113 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0603 13:15:22.778123 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.778133 1114112 command_runner.go:130] >       "size": "63026504",
	I0603 13:15:22.778143 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.778152 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.778158 1114112 command_runner.go:130] >       },
	I0603 13:15:22.778167 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.778173 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.778182 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.778187 1114112 command_runner.go:130] >     },
	I0603 13:15:22.778195 1114112 command_runner.go:130] >     {
	I0603 13:15:22.778204 1114112 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0603 13:15:22.778214 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.778225 1114112 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0603 13:15:22.778237 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.778246 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.778256 1114112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0603 13:15:22.778270 1114112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0603 13:15:22.778279 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.778286 1114112 command_runner.go:130] >       "size": "750414",
	I0603 13:15:22.778295 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.778301 1114112 command_runner.go:130] >         "value": "65535"
	I0603 13:15:22.778308 1114112 command_runner.go:130] >       },
	I0603 13:15:22.778315 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.778323 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.778327 1114112 command_runner.go:130] >       "pinned": true
	I0603 13:15:22.778333 1114112 command_runner.go:130] >     }
	I0603 13:15:22.778336 1114112 command_runner.go:130] >   ]
	I0603 13:15:22.778341 1114112 command_runner.go:130] > }
	I0603 13:15:22.778632 1114112 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:15:22.778650 1114112 crio.go:433] Images already preloaded, skipping extraction
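The JSON above is the raw `crictl images --output json` listing that the preload check parses before concluding that every required image is already present; minikube then runs the same listing once more for the image-cache step below. A minimal Go sketch of that kind of check follows; the struct shape mirrors the fields in the listing, but the names and the required-image list are illustrative, not minikube's actual code.

// check_images.go - illustrative only; run against a saved copy of the JSON,
// e.g. `go run check_images.go images.json`.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// Reduced view of `crictl images --output json`, matching the fields shown above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	raw, err := os.ReadFile(os.Args[1]) // path to the saved JSON output
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}

	// Hypothetical required set for a v1.30.1 crio cluster, taken from the listing above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/kube-controller-manager:v1.30.1",
		"registry.k8s.io/kube-scheduler:v1.30.1",
		"registry.k8s.io/kube-proxy:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}

	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, want := range required {
		if !have[want] {
			missing = append(missing, want)
		}
	}
	if len(missing) == 0 {
		fmt.Println("all images are preloaded for cri-o runtime")
	} else {
		fmt.Printf("missing: %s\n", strings.Join(missing, ", "))
	}
}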
	I0603 13:15:22.778734 1114112 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:15:22.812942 1114112 command_runner.go:130] > {
	I0603 13:15:22.812973 1114112 command_runner.go:130] >   "images": [
	I0603 13:15:22.812979 1114112 command_runner.go:130] >     {
	I0603 13:15:22.812991 1114112 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0603 13:15:22.812998 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813010 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0603 13:15:22.813014 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813018 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813027 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0603 13:15:22.813034 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0603 13:15:22.813038 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813042 1114112 command_runner.go:130] >       "size": "65291810",
	I0603 13:15:22.813049 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813053 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813062 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813068 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813072 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813075 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813081 1114112 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0603 13:15:22.813086 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813096 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0603 13:15:22.813103 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813106 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813115 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0603 13:15:22.813124 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0603 13:15:22.813130 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813134 1114112 command_runner.go:130] >       "size": "65908273",
	I0603 13:15:22.813138 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813148 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813154 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813158 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813163 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813166 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813174 1114112 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0603 13:15:22.813180 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813185 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0603 13:15:22.813191 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813195 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813204 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0603 13:15:22.813213 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0603 13:15:22.813219 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813223 1114112 command_runner.go:130] >       "size": "1363676",
	I0603 13:15:22.813227 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813243 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813250 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813254 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813258 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813262 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813272 1114112 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0603 13:15:22.813281 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813292 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0603 13:15:22.813301 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813311 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813325 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0603 13:15:22.813343 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0603 13:15:22.813350 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813362 1114112 command_runner.go:130] >       "size": "31470524",
	I0603 13:15:22.813368 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813373 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813379 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813383 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813388 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813391 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813400 1114112 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0603 13:15:22.813421 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813429 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0603 13:15:22.813435 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813443 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813452 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0603 13:15:22.813462 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0603 13:15:22.813467 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813472 1114112 command_runner.go:130] >       "size": "61245718",
	I0603 13:15:22.813478 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813483 1114112 command_runner.go:130] >       "username": "nonroot",
	I0603 13:15:22.813491 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813495 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813501 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813504 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813513 1114112 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0603 13:15:22.813519 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813524 1114112 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0603 13:15:22.813530 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813534 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813540 1114112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0603 13:15:22.813549 1114112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0603 13:15:22.813555 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813560 1114112 command_runner.go:130] >       "size": "150779692",
	I0603 13:15:22.813566 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.813570 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.813576 1114112 command_runner.go:130] >       },
	I0603 13:15:22.813580 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813587 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813597 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813603 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813607 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813615 1114112 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0603 13:15:22.813619 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813624 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0603 13:15:22.813630 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813633 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813643 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0603 13:15:22.813652 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0603 13:15:22.813657 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813662 1114112 command_runner.go:130] >       "size": "117601759",
	I0603 13:15:22.813670 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.813680 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.813688 1114112 command_runner.go:130] >       },
	I0603 13:15:22.813698 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813707 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813716 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813724 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813733 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813745 1114112 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0603 13:15:22.813754 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813766 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0603 13:15:22.813775 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813784 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813821 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0603 13:15:22.813836 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0603 13:15:22.813842 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813852 1114112 command_runner.go:130] >       "size": "112170310",
	I0603 13:15:22.813860 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.813866 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.813874 1114112 command_runner.go:130] >       },
	I0603 13:15:22.813881 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813890 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813896 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813901 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813919 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813933 1114112 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0603 13:15:22.813942 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813951 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0603 13:15:22.813959 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813966 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813981 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0603 13:15:22.814001 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0603 13:15:22.814011 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814017 1114112 command_runner.go:130] >       "size": "85933465",
	I0603 13:15:22.814026 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.814033 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.814041 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.814045 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.814051 1114112 command_runner.go:130] >     },
	I0603 13:15:22.814054 1114112 command_runner.go:130] >     {
	I0603 13:15:22.814061 1114112 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0603 13:15:22.814067 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.814074 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0603 13:15:22.814082 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814088 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.814101 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0603 13:15:22.814117 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0603 13:15:22.814125 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814130 1114112 command_runner.go:130] >       "size": "63026504",
	I0603 13:15:22.814135 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.814139 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.814143 1114112 command_runner.go:130] >       },
	I0603 13:15:22.814147 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.814151 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.814155 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.814161 1114112 command_runner.go:130] >     },
	I0603 13:15:22.814165 1114112 command_runner.go:130] >     {
	I0603 13:15:22.814170 1114112 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0603 13:15:22.814177 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.814181 1114112 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0603 13:15:22.814192 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814199 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.814206 1114112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0603 13:15:22.814215 1114112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0603 13:15:22.814221 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814225 1114112 command_runner.go:130] >       "size": "750414",
	I0603 13:15:22.814228 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.814238 1114112 command_runner.go:130] >         "value": "65535"
	I0603 13:15:22.814244 1114112 command_runner.go:130] >       },
	I0603 13:15:22.814248 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.814252 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.814255 1114112 command_runner.go:130] >       "pinned": true
	I0603 13:15:22.814258 1114112 command_runner.go:130] >     }
	I0603 13:15:22.814261 1114112 command_runner.go:130] >   ]
	I0603 13:15:22.814265 1114112 command_runner.go:130] > }
	I0603 13:15:22.814424 1114112 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:15:22.814438 1114112 cache_images.go:84] Images are preloaded, skipping loading
	I0603 13:15:22.814447 1114112 kubeadm.go:928] updating node { 192.168.39.141 8443 v1.30.1 crio true true} ...
	I0603 13:15:22.814579 1114112 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101468 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
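The [Unit]/[Service] snippet above is the kubelet systemd drop-in minikube generates for the primary node, with the hostname override and --node-ip filled in from the cluster config. A small Go text/template sketch that renders the same drop-in follows; the parameter names and template are illustrative, not minikube's real implementation.

// kubelet_unit.go - illustrative sketch of rendering the drop-in shown above.
package main

import (
	"os"
	"text/template"
)

// Hypothetical parameter bag; values below are taken from the log above.
type kubeletParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
	ContainerRuntime  string
}

const unitTmpl = `[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	p := kubeletParams{
		KubernetesVersion: "v1.30.1",
		NodeName:          "multinode-101468",
		NodeIP:            "192.168.39.141",
		ContainerRuntime:  "crio",
	}
	// Render to stdout; a real flow would write this to a systemd drop-in file on the node.
	if err := template.Must(template.New("kubelet").Parse(unitTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}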
	I0603 13:15:22.814643 1114112 ssh_runner.go:195] Run: crio config
	I0603 13:15:22.848439 1114112 command_runner.go:130] ! time="2024-06-03 13:15:22.825434204Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0603 13:15:22.854503 1114112 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0603 13:15:22.861480 1114112 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0603 13:15:22.861502 1114112 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0603 13:15:22.861508 1114112 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0603 13:15:22.861511 1114112 command_runner.go:130] > #
	I0603 13:15:22.861519 1114112 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0603 13:15:22.861525 1114112 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0603 13:15:22.861531 1114112 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0603 13:15:22.861537 1114112 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0603 13:15:22.861547 1114112 command_runner.go:130] > # reload'.
	I0603 13:15:22.861553 1114112 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0603 13:15:22.861559 1114112 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0603 13:15:22.861565 1114112 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0603 13:15:22.861580 1114112 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0603 13:15:22.861588 1114112 command_runner.go:130] > [crio]
	I0603 13:15:22.861599 1114112 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0603 13:15:22.861609 1114112 command_runner.go:130] > # containers images, in this directory.
	I0603 13:15:22.861616 1114112 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0603 13:15:22.861631 1114112 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0603 13:15:22.861642 1114112 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0603 13:15:22.861653 1114112 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0603 13:15:22.861660 1114112 command_runner.go:130] > # imagestore = ""
	I0603 13:15:22.861670 1114112 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0603 13:15:22.861691 1114112 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0603 13:15:22.861698 1114112 command_runner.go:130] > storage_driver = "overlay"
	I0603 13:15:22.861703 1114112 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0603 13:15:22.861713 1114112 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0603 13:15:22.861723 1114112 command_runner.go:130] > storage_option = [
	I0603 13:15:22.861733 1114112 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0603 13:15:22.861741 1114112 command_runner.go:130] > ]
	I0603 13:15:22.861751 1114112 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0603 13:15:22.861757 1114112 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0603 13:15:22.861761 1114112 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0603 13:15:22.861766 1114112 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0603 13:15:22.861775 1114112 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0603 13:15:22.861779 1114112 command_runner.go:130] > # always happen on a node reboot
	I0603 13:15:22.861786 1114112 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0603 13:15:22.861798 1114112 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0603 13:15:22.861806 1114112 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0603 13:15:22.861811 1114112 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0603 13:15:22.861816 1114112 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0603 13:15:22.861823 1114112 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0603 13:15:22.861833 1114112 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0603 13:15:22.861838 1114112 command_runner.go:130] > # internal_wipe = true
	I0603 13:15:22.861846 1114112 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0603 13:15:22.861859 1114112 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0603 13:15:22.861865 1114112 command_runner.go:130] > # internal_repair = false
	I0603 13:15:22.861871 1114112 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0603 13:15:22.861876 1114112 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0603 13:15:22.861884 1114112 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0603 13:15:22.861889 1114112 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0603 13:15:22.861894 1114112 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0603 13:15:22.861900 1114112 command_runner.go:130] > [crio.api]
	I0603 13:15:22.861905 1114112 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0603 13:15:22.861912 1114112 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0603 13:15:22.861917 1114112 command_runner.go:130] > # IP address on which the stream server will listen.
	I0603 13:15:22.861924 1114112 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0603 13:15:22.861930 1114112 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0603 13:15:22.861937 1114112 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0603 13:15:22.861941 1114112 command_runner.go:130] > # stream_port = "0"
	I0603 13:15:22.861946 1114112 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0603 13:15:22.861953 1114112 command_runner.go:130] > # stream_enable_tls = false
	I0603 13:15:22.861959 1114112 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0603 13:15:22.861965 1114112 command_runner.go:130] > # stream_idle_timeout = ""
	I0603 13:15:22.861971 1114112 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0603 13:15:22.861977 1114112 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0603 13:15:22.861983 1114112 command_runner.go:130] > # minutes.
	I0603 13:15:22.861987 1114112 command_runner.go:130] > # stream_tls_cert = ""
	I0603 13:15:22.861996 1114112 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0603 13:15:22.862005 1114112 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0603 13:15:22.862008 1114112 command_runner.go:130] > # stream_tls_key = ""
	I0603 13:15:22.862014 1114112 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0603 13:15:22.862021 1114112 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0603 13:15:22.862039 1114112 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0603 13:15:22.862047 1114112 command_runner.go:130] > # stream_tls_ca = ""
	I0603 13:15:22.862053 1114112 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0603 13:15:22.862058 1114112 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0603 13:15:22.862064 1114112 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0603 13:15:22.862074 1114112 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0603 13:15:22.862080 1114112 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0603 13:15:22.862085 1114112 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0603 13:15:22.862093 1114112 command_runner.go:130] > [crio.runtime]
	I0603 13:15:22.862098 1114112 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0603 13:15:22.862103 1114112 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0603 13:15:22.862107 1114112 command_runner.go:130] > # "nofile=1024:2048"
	I0603 13:15:22.862112 1114112 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0603 13:15:22.862116 1114112 command_runner.go:130] > # default_ulimits = [
	I0603 13:15:22.862119 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862124 1114112 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0603 13:15:22.862129 1114112 command_runner.go:130] > # no_pivot = false
	I0603 13:15:22.862133 1114112 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0603 13:15:22.862139 1114112 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0603 13:15:22.862144 1114112 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0603 13:15:22.862149 1114112 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0603 13:15:22.862155 1114112 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0603 13:15:22.862161 1114112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0603 13:15:22.862166 1114112 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0603 13:15:22.862170 1114112 command_runner.go:130] > # Cgroup setting for conmon
	I0603 13:15:22.862178 1114112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0603 13:15:22.862182 1114112 command_runner.go:130] > conmon_cgroup = "pod"
	I0603 13:15:22.862187 1114112 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0603 13:15:22.862194 1114112 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0603 13:15:22.862201 1114112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0603 13:15:22.862207 1114112 command_runner.go:130] > conmon_env = [
	I0603 13:15:22.862212 1114112 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0603 13:15:22.862215 1114112 command_runner.go:130] > ]
	I0603 13:15:22.862220 1114112 command_runner.go:130] > # Additional environment variables to set for all the
	I0603 13:15:22.862230 1114112 command_runner.go:130] > # containers. These are overridden if set in the
	I0603 13:15:22.862238 1114112 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0603 13:15:22.862242 1114112 command_runner.go:130] > # default_env = [
	I0603 13:15:22.862245 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862251 1114112 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0603 13:15:22.862260 1114112 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0603 13:15:22.862270 1114112 command_runner.go:130] > # selinux = false
	I0603 13:15:22.862278 1114112 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0603 13:15:22.862284 1114112 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0603 13:15:22.862291 1114112 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0603 13:15:22.862301 1114112 command_runner.go:130] > # seccomp_profile = ""
	I0603 13:15:22.862309 1114112 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0603 13:15:22.862314 1114112 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0603 13:15:22.862320 1114112 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0603 13:15:22.862326 1114112 command_runner.go:130] > # which might increase security.
	I0603 13:15:22.862331 1114112 command_runner.go:130] > # This option is currently deprecated,
	I0603 13:15:22.862336 1114112 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0603 13:15:22.862343 1114112 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0603 13:15:22.862349 1114112 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0603 13:15:22.862356 1114112 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0603 13:15:22.862362 1114112 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0603 13:15:22.862370 1114112 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0603 13:15:22.862375 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.862382 1114112 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0603 13:15:22.862387 1114112 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0603 13:15:22.862394 1114112 command_runner.go:130] > # the cgroup blockio controller.
	I0603 13:15:22.862398 1114112 command_runner.go:130] > # blockio_config_file = ""
	I0603 13:15:22.862404 1114112 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0603 13:15:22.862409 1114112 command_runner.go:130] > # blockio parameters.
	I0603 13:15:22.862412 1114112 command_runner.go:130] > # blockio_reload = false
	I0603 13:15:22.862418 1114112 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0603 13:15:22.862424 1114112 command_runner.go:130] > # irqbalance daemon.
	I0603 13:15:22.862429 1114112 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0603 13:15:22.862438 1114112 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0603 13:15:22.862444 1114112 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0603 13:15:22.862453 1114112 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0603 13:15:22.862458 1114112 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0603 13:15:22.862465 1114112 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0603 13:15:22.862471 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.862475 1114112 command_runner.go:130] > # rdt_config_file = ""
	I0603 13:15:22.862480 1114112 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0603 13:15:22.862484 1114112 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0603 13:15:22.862517 1114112 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0603 13:15:22.862524 1114112 command_runner.go:130] > # separate_pull_cgroup = ""
	I0603 13:15:22.862530 1114112 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0603 13:15:22.862535 1114112 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0603 13:15:22.862547 1114112 command_runner.go:130] > # will be added.
	I0603 13:15:22.862552 1114112 command_runner.go:130] > # default_capabilities = [
	I0603 13:15:22.862555 1114112 command_runner.go:130] > # 	"CHOWN",
	I0603 13:15:22.862561 1114112 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0603 13:15:22.862564 1114112 command_runner.go:130] > # 	"FSETID",
	I0603 13:15:22.862570 1114112 command_runner.go:130] > # 	"FOWNER",
	I0603 13:15:22.862573 1114112 command_runner.go:130] > # 	"SETGID",
	I0603 13:15:22.862577 1114112 command_runner.go:130] > # 	"SETUID",
	I0603 13:15:22.862580 1114112 command_runner.go:130] > # 	"SETPCAP",
	I0603 13:15:22.862584 1114112 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0603 13:15:22.862587 1114112 command_runner.go:130] > # 	"KILL",
	I0603 13:15:22.862591 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862600 1114112 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0603 13:15:22.862609 1114112 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0603 13:15:22.862613 1114112 command_runner.go:130] > # add_inheritable_capabilities = false
	I0603 13:15:22.862621 1114112 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0603 13:15:22.862626 1114112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0603 13:15:22.862631 1114112 command_runner.go:130] > default_sysctls = [
	I0603 13:15:22.862638 1114112 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0603 13:15:22.862641 1114112 command_runner.go:130] > ]
	I0603 13:15:22.862646 1114112 command_runner.go:130] > # List of devices on the host that a
	I0603 13:15:22.862654 1114112 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0603 13:15:22.862658 1114112 command_runner.go:130] > # allowed_devices = [
	I0603 13:15:22.862668 1114112 command_runner.go:130] > # 	"/dev/fuse",
	I0603 13:15:22.862671 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862676 1114112 command_runner.go:130] > # List of additional devices. specified as
	I0603 13:15:22.862683 1114112 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0603 13:15:22.862690 1114112 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0603 13:15:22.862695 1114112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0603 13:15:22.862701 1114112 command_runner.go:130] > # additional_devices = [
	I0603 13:15:22.862704 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862709 1114112 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0603 13:15:22.862715 1114112 command_runner.go:130] > # cdi_spec_dirs = [
	I0603 13:15:22.862718 1114112 command_runner.go:130] > # 	"/etc/cdi",
	I0603 13:15:22.862722 1114112 command_runner.go:130] > # 	"/var/run/cdi",
	I0603 13:15:22.862725 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862737 1114112 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0603 13:15:22.862745 1114112 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0603 13:15:22.862749 1114112 command_runner.go:130] > # Defaults to false.
	I0603 13:15:22.862756 1114112 command_runner.go:130] > # device_ownership_from_security_context = false
	I0603 13:15:22.862762 1114112 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0603 13:15:22.862770 1114112 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0603 13:15:22.862773 1114112 command_runner.go:130] > # hooks_dir = [
	I0603 13:15:22.862778 1114112 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0603 13:15:22.862784 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862789 1114112 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0603 13:15:22.862795 1114112 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0603 13:15:22.862802 1114112 command_runner.go:130] > # its default mounts from the following two files:
	I0603 13:15:22.862805 1114112 command_runner.go:130] > #
	I0603 13:15:22.862811 1114112 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0603 13:15:22.862818 1114112 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0603 13:15:22.862823 1114112 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0603 13:15:22.862829 1114112 command_runner.go:130] > #
	I0603 13:15:22.862834 1114112 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0603 13:15:22.862842 1114112 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0603 13:15:22.862849 1114112 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0603 13:15:22.862855 1114112 command_runner.go:130] > #      only add mounts it finds in this file.
	I0603 13:15:22.862858 1114112 command_runner.go:130] > #
	I0603 13:15:22.862862 1114112 command_runner.go:130] > # default_mounts_file = ""
	I0603 13:15:22.862867 1114112 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0603 13:15:22.862874 1114112 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0603 13:15:22.862878 1114112 command_runner.go:130] > pids_limit = 1024
	I0603 13:15:22.862884 1114112 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0603 13:15:22.862891 1114112 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0603 13:15:22.862897 1114112 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0603 13:15:22.862907 1114112 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0603 13:15:22.862911 1114112 command_runner.go:130] > # log_size_max = -1
	I0603 13:15:22.862917 1114112 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0603 13:15:22.862929 1114112 command_runner.go:130] > # log_to_journald = false
	I0603 13:15:22.862937 1114112 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0603 13:15:22.862942 1114112 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0603 13:15:22.862949 1114112 command_runner.go:130] > # Path to directory for container attach sockets.
	I0603 13:15:22.862958 1114112 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0603 13:15:22.862966 1114112 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0603 13:15:22.862970 1114112 command_runner.go:130] > # bind_mount_prefix = ""
	I0603 13:15:22.862977 1114112 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0603 13:15:22.862981 1114112 command_runner.go:130] > # read_only = false
	I0603 13:15:22.862988 1114112 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0603 13:15:22.862993 1114112 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0603 13:15:22.862998 1114112 command_runner.go:130] > # live configuration reload.
	I0603 13:15:22.863001 1114112 command_runner.go:130] > # log_level = "info"
	I0603 13:15:22.863006 1114112 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0603 13:15:22.863013 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.863017 1114112 command_runner.go:130] > # log_filter = ""
	I0603 13:15:22.863022 1114112 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0603 13:15:22.863031 1114112 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0603 13:15:22.863035 1114112 command_runner.go:130] > # separated by comma.
	I0603 13:15:22.863042 1114112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 13:15:22.863049 1114112 command_runner.go:130] > # uid_mappings = ""
	I0603 13:15:22.863055 1114112 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0603 13:15:22.863065 1114112 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0603 13:15:22.863071 1114112 command_runner.go:130] > # separated by comma.
	I0603 13:15:22.863078 1114112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 13:15:22.863084 1114112 command_runner.go:130] > # gid_mappings = ""
	I0603 13:15:22.863090 1114112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0603 13:15:22.863097 1114112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0603 13:15:22.863102 1114112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0603 13:15:22.863112 1114112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 13:15:22.863116 1114112 command_runner.go:130] > # minimum_mappable_uid = -1
	I0603 13:15:22.863123 1114112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0603 13:15:22.863129 1114112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0603 13:15:22.863137 1114112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0603 13:15:22.863144 1114112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 13:15:22.863151 1114112 command_runner.go:130] > # minimum_mappable_gid = -1
	I0603 13:15:22.863156 1114112 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0603 13:15:22.863164 1114112 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0603 13:15:22.863169 1114112 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0603 13:15:22.863176 1114112 command_runner.go:130] > # ctr_stop_timeout = 30
	I0603 13:15:22.863186 1114112 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0603 13:15:22.863194 1114112 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0603 13:15:22.863198 1114112 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0603 13:15:22.863203 1114112 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0603 13:15:22.863207 1114112 command_runner.go:130] > drop_infra_ctr = false
	I0603 13:15:22.863213 1114112 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0603 13:15:22.863220 1114112 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0603 13:15:22.863226 1114112 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0603 13:15:22.863231 1114112 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0603 13:15:22.863237 1114112 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0603 13:15:22.863243 1114112 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0603 13:15:22.863250 1114112 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0603 13:15:22.863256 1114112 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0603 13:15:22.863262 1114112 command_runner.go:130] > # shared_cpuset = ""
	I0603 13:15:22.863275 1114112 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0603 13:15:22.863283 1114112 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0603 13:15:22.863287 1114112 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0603 13:15:22.863297 1114112 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0603 13:15:22.863301 1114112 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0603 13:15:22.863306 1114112 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0603 13:15:22.863314 1114112 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0603 13:15:22.863318 1114112 command_runner.go:130] > # enable_criu_support = false
	I0603 13:15:22.863327 1114112 command_runner.go:130] > # Enable/disable the generation of the container,
	I0603 13:15:22.863333 1114112 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0603 13:15:22.863340 1114112 command_runner.go:130] > # enable_pod_events = false
	I0603 13:15:22.863346 1114112 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0603 13:15:22.863359 1114112 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0603 13:15:22.863365 1114112 command_runner.go:130] > # default_runtime = "runc"
	I0603 13:15:22.863370 1114112 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0603 13:15:22.863378 1114112 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0603 13:15:22.863388 1114112 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0603 13:15:22.863396 1114112 command_runner.go:130] > # creation as a file is not desired either.
	I0603 13:15:22.863403 1114112 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0603 13:15:22.863408 1114112 command_runner.go:130] > # the hostname is being managed dynamically.
	I0603 13:15:22.863418 1114112 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0603 13:15:22.863426 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.863434 1114112 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0603 13:15:22.863440 1114112 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0603 13:15:22.863448 1114112 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0603 13:15:22.863453 1114112 command_runner.go:130] > # Each entry in the table should follow the format:
	I0603 13:15:22.863458 1114112 command_runner.go:130] > #
	I0603 13:15:22.863462 1114112 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0603 13:15:22.863467 1114112 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0603 13:15:22.863516 1114112 command_runner.go:130] > # runtime_type = "oci"
	I0603 13:15:22.863523 1114112 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0603 13:15:22.863527 1114112 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0603 13:15:22.863531 1114112 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0603 13:15:22.863537 1114112 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0603 13:15:22.863541 1114112 command_runner.go:130] > # monitor_env = []
	I0603 13:15:22.863548 1114112 command_runner.go:130] > # privileged_without_host_devices = false
	I0603 13:15:22.863551 1114112 command_runner.go:130] > # allowed_annotations = []
	I0603 13:15:22.863557 1114112 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0603 13:15:22.863562 1114112 command_runner.go:130] > # Where:
	I0603 13:15:22.863567 1114112 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0603 13:15:22.863576 1114112 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0603 13:15:22.863582 1114112 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0603 13:15:22.863590 1114112 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0603 13:15:22.863594 1114112 command_runner.go:130] > #   in $PATH.
	I0603 13:15:22.863601 1114112 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0603 13:15:22.863606 1114112 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0603 13:15:22.863614 1114112 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0603 13:15:22.863618 1114112 command_runner.go:130] > #   state.
	I0603 13:15:22.863626 1114112 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0603 13:15:22.863631 1114112 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0603 13:15:22.863639 1114112 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0603 13:15:22.863644 1114112 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0603 13:15:22.863650 1114112 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0603 13:15:22.863657 1114112 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0603 13:15:22.863669 1114112 command_runner.go:130] > #   The currently recognized values are:
	I0603 13:15:22.863677 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0603 13:15:22.863684 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0603 13:15:22.863697 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0603 13:15:22.863705 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0603 13:15:22.863712 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0603 13:15:22.863720 1114112 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0603 13:15:22.863727 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0603 13:15:22.863735 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0603 13:15:22.863740 1114112 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0603 13:15:22.863748 1114112 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0603 13:15:22.863753 1114112 command_runner.go:130] > #   deprecated option "conmon".
	I0603 13:15:22.863760 1114112 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0603 13:15:22.863766 1114112 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0603 13:15:22.863773 1114112 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0603 13:15:22.863779 1114112 command_runner.go:130] > #   should be moved to the container's cgroup
	I0603 13:15:22.863786 1114112 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0603 13:15:22.863792 1114112 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0603 13:15:22.863799 1114112 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0603 13:15:22.863806 1114112 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0603 13:15:22.863809 1114112 command_runner.go:130] > #
	I0603 13:15:22.863814 1114112 command_runner.go:130] > # Using the seccomp notifier feature:
	I0603 13:15:22.863817 1114112 command_runner.go:130] > #
	I0603 13:15:22.863822 1114112 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0603 13:15:22.863829 1114112 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0603 13:15:22.863832 1114112 command_runner.go:130] > #
	I0603 13:15:22.863838 1114112 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0603 13:15:22.863846 1114112 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0603 13:15:22.863849 1114112 command_runner.go:130] > #
	I0603 13:15:22.863857 1114112 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0603 13:15:22.863860 1114112 command_runner.go:130] > # feature.
	I0603 13:15:22.863863 1114112 command_runner.go:130] > #
	I0603 13:15:22.863868 1114112 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0603 13:15:22.863874 1114112 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0603 13:15:22.863880 1114112 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0603 13:15:22.863889 1114112 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0603 13:15:22.863895 1114112 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0603 13:15:22.863900 1114112 command_runner.go:130] > #
	I0603 13:15:22.863906 1114112 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0603 13:15:22.863917 1114112 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0603 13:15:22.863922 1114112 command_runner.go:130] > #
	I0603 13:15:22.863927 1114112 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0603 13:15:22.863935 1114112 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0603 13:15:22.863938 1114112 command_runner.go:130] > #
	I0603 13:15:22.863944 1114112 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0603 13:15:22.863951 1114112 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0603 13:15:22.863955 1114112 command_runner.go:130] > # limitation.
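The comment block above describes the seccomp notifier workflow end to end: allow the annotation on a runtime handler, set it on the pod, and keep restartPolicy at Never. As a minimal sketch of the pod side only (assuming a runtime handler that already lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations and a cluster reachable with kubectl; the pod name and image are placeholders, not taken from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notifier-demo        # hypothetical name, for illustration only
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never               # required, otherwise the kubelet restarts the container
  containers:
  - name: demo
    image: busybox                   # placeholder image
    command: ["sleep", "3600"]
    securityContext:
      seccompProfile:
        type: RuntimeDefault         # CRI-O adjusts the chosen profile when the annotation is set
EOF
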
	I0603 13:15:22.863959 1114112 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0603 13:15:22.863966 1114112 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0603 13:15:22.863970 1114112 command_runner.go:130] > runtime_type = "oci"
	I0603 13:15:22.863974 1114112 command_runner.go:130] > runtime_root = "/run/runc"
	I0603 13:15:22.863978 1114112 command_runner.go:130] > runtime_config_path = ""
	I0603 13:15:22.863983 1114112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0603 13:15:22.863989 1114112 command_runner.go:130] > monitor_cgroup = "pod"
	I0603 13:15:22.863993 1114112 command_runner.go:130] > monitor_exec_cgroup = ""
	I0603 13:15:22.863997 1114112 command_runner.go:130] > monitor_env = [
	I0603 13:15:22.864002 1114112 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0603 13:15:22.864005 1114112 command_runner.go:130] > ]
	I0603 13:15:22.864009 1114112 command_runner.go:130] > privileged_without_host_devices = false
	I0603 13:15:22.864017 1114112 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0603 13:15:22.864022 1114112 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0603 13:15:22.864031 1114112 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0603 13:15:22.864038 1114112 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0603 13:15:22.864049 1114112 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0603 13:15:22.864057 1114112 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0603 13:15:22.864065 1114112 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0603 13:15:22.864075 1114112 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0603 13:15:22.864080 1114112 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0603 13:15:22.864087 1114112 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0603 13:15:22.864090 1114112 command_runner.go:130] > # Example:
	I0603 13:15:22.864094 1114112 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0603 13:15:22.864098 1114112 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0603 13:15:22.864103 1114112 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0603 13:15:22.864108 1114112 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0603 13:15:22.864111 1114112 command_runner.go:130] > # cpuset = 0
	I0603 13:15:22.864119 1114112 command_runner.go:130] > # cpushares = "0-1"
	I0603 13:15:22.864122 1114112 command_runner.go:130] > # Where:
	I0603 13:15:22.864126 1114112 command_runner.go:130] > # The workload name is workload-type.
	I0603 13:15:22.864133 1114112 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0603 13:15:22.864138 1114112 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0603 13:15:22.864145 1114112 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0603 13:15:22.864152 1114112 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0603 13:15:22.864157 1114112 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0603 13:15:22.864162 1114112 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0603 13:15:22.864168 1114112 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0603 13:15:22.864174 1114112 command_runner.go:130] > # Default value is set to true
	I0603 13:15:22.864178 1114112 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0603 13:15:22.864184 1114112 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0603 13:15:22.864190 1114112 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0603 13:15:22.864194 1114112 command_runner.go:130] > # Default value is set to 'false'
	I0603 13:15:22.864200 1114112 command_runner.go:130] > # disable_hostport_mapping = false
	I0603 13:15:22.864206 1114112 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0603 13:15:22.864211 1114112 command_runner.go:130] > #
	I0603 13:15:22.864216 1114112 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0603 13:15:22.864223 1114112 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0603 13:15:22.864229 1114112 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0603 13:15:22.864237 1114112 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0603 13:15:22.864242 1114112 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0603 13:15:22.864248 1114112 command_runner.go:130] > [crio.image]
	I0603 13:15:22.864254 1114112 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0603 13:15:22.864261 1114112 command_runner.go:130] > # default_transport = "docker://"
	I0603 13:15:22.864270 1114112 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0603 13:15:22.864278 1114112 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0603 13:15:22.864282 1114112 command_runner.go:130] > # global_auth_file = ""
	I0603 13:15:22.864287 1114112 command_runner.go:130] > # The image used to instantiate infra containers.
	I0603 13:15:22.864292 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.864297 1114112 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0603 13:15:22.864306 1114112 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0603 13:15:22.864311 1114112 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0603 13:15:22.864318 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.864322 1114112 command_runner.go:130] > # pause_image_auth_file = ""
	I0603 13:15:22.864333 1114112 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0603 13:15:22.864341 1114112 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0603 13:15:22.864347 1114112 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0603 13:15:22.864355 1114112 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0603 13:15:22.864359 1114112 command_runner.go:130] > # pause_command = "/pause"
	I0603 13:15:22.864364 1114112 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0603 13:15:22.864374 1114112 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0603 13:15:22.864380 1114112 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0603 13:15:22.864387 1114112 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0603 13:15:22.864394 1114112 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0603 13:15:22.864400 1114112 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0603 13:15:22.864406 1114112 command_runner.go:130] > # pinned_images = [
	I0603 13:15:22.864409 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.864414 1114112 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0603 13:15:22.864423 1114112 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0603 13:15:22.864429 1114112 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0603 13:15:22.864437 1114112 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0603 13:15:22.864441 1114112 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0603 13:15:22.864446 1114112 command_runner.go:130] > # signature_policy = ""
	I0603 13:15:22.864452 1114112 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0603 13:15:22.864460 1114112 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0603 13:15:22.864466 1114112 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0603 13:15:22.864474 1114112 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0603 13:15:22.864479 1114112 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0603 13:15:22.864483 1114112 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0603 13:15:22.864491 1114112 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0603 13:15:22.864497 1114112 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0603 13:15:22.864503 1114112 command_runner.go:130] > # changing them here.
	I0603 13:15:22.864507 1114112 command_runner.go:130] > # insecure_registries = [
	I0603 13:15:22.864512 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.864518 1114112 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0603 13:15:22.864525 1114112 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0603 13:15:22.864529 1114112 command_runner.go:130] > # image_volumes = "mkdir"
	I0603 13:15:22.864534 1114112 command_runner.go:130] > # Temporary directory to use for storing big files
	I0603 13:15:22.864539 1114112 command_runner.go:130] > # big_files_temporary_dir = ""
	I0603 13:15:22.864547 1114112 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0603 13:15:22.864555 1114112 command_runner.go:130] > # CNI plugins.
	I0603 13:15:22.864561 1114112 command_runner.go:130] > [crio.network]
	I0603 13:15:22.864567 1114112 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0603 13:15:22.864574 1114112 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0603 13:15:22.864578 1114112 command_runner.go:130] > # cni_default_network = ""
	I0603 13:15:22.864583 1114112 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0603 13:15:22.864590 1114112 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0603 13:15:22.864595 1114112 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0603 13:15:22.864600 1114112 command_runner.go:130] > # plugin_dirs = [
	I0603 13:15:22.864604 1114112 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0603 13:15:22.864610 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.864615 1114112 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0603 13:15:22.864621 1114112 command_runner.go:130] > [crio.metrics]
	I0603 13:15:22.864626 1114112 command_runner.go:130] > # Globally enable or disable metrics support.
	I0603 13:15:22.864631 1114112 command_runner.go:130] > enable_metrics = true
	I0603 13:15:22.864636 1114112 command_runner.go:130] > # Specify enabled metrics collectors.
	I0603 13:15:22.864641 1114112 command_runner.go:130] > # Per default all metrics are enabled.
	I0603 13:15:22.864647 1114112 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0603 13:15:22.864655 1114112 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0603 13:15:22.864660 1114112 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0603 13:15:22.864667 1114112 command_runner.go:130] > # metrics_collectors = [
	I0603 13:15:22.864670 1114112 command_runner.go:130] > # 	"operations",
	I0603 13:15:22.864675 1114112 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0603 13:15:22.864679 1114112 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0603 13:15:22.864685 1114112 command_runner.go:130] > # 	"operations_errors",
	I0603 13:15:22.864689 1114112 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0603 13:15:22.864695 1114112 command_runner.go:130] > # 	"image_pulls_by_name",
	I0603 13:15:22.864699 1114112 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0603 13:15:22.864703 1114112 command_runner.go:130] > # 	"image_pulls_failures",
	I0603 13:15:22.864707 1114112 command_runner.go:130] > # 	"image_pulls_successes",
	I0603 13:15:22.864713 1114112 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0603 13:15:22.864717 1114112 command_runner.go:130] > # 	"image_layer_reuse",
	I0603 13:15:22.864721 1114112 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0603 13:15:22.864725 1114112 command_runner.go:130] > # 	"containers_oom_total",
	I0603 13:15:22.864729 1114112 command_runner.go:130] > # 	"containers_oom",
	I0603 13:15:22.864733 1114112 command_runner.go:130] > # 	"processes_defunct",
	I0603 13:15:22.864741 1114112 command_runner.go:130] > # 	"operations_total",
	I0603 13:15:22.864747 1114112 command_runner.go:130] > # 	"operations_latency_seconds",
	I0603 13:15:22.864752 1114112 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0603 13:15:22.864758 1114112 command_runner.go:130] > # 	"operations_errors_total",
	I0603 13:15:22.864762 1114112 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0603 13:15:22.864766 1114112 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0603 13:15:22.864772 1114112 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0603 13:15:22.864776 1114112 command_runner.go:130] > # 	"image_pulls_success_total",
	I0603 13:15:22.864780 1114112 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0603 13:15:22.864787 1114112 command_runner.go:130] > # 	"containers_oom_count_total",
	I0603 13:15:22.864791 1114112 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0603 13:15:22.864798 1114112 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0603 13:15:22.864801 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.864805 1114112 command_runner.go:130] > # The port on which the metrics server will listen.
	I0603 13:15:22.864815 1114112 command_runner.go:130] > # metrics_port = 9090
	I0603 13:15:22.864823 1114112 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0603 13:15:22.864827 1114112 command_runner.go:130] > # metrics_socket = ""
	I0603 13:15:22.864833 1114112 command_runner.go:130] > # The certificate for the secure metrics server.
	I0603 13:15:22.864841 1114112 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0603 13:15:22.864846 1114112 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0603 13:15:22.864853 1114112 command_runner.go:130] > # certificate on any modification event.
	I0603 13:15:22.864856 1114112 command_runner.go:130] > # metrics_cert = ""
	I0603 13:15:22.864861 1114112 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0603 13:15:22.864867 1114112 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0603 13:15:22.864871 1114112 command_runner.go:130] > # metrics_key = ""
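With enable_metrics set to true above, the Prometheus endpoint can be probed directly on the node. A minimal check, assuming the default metrics_port of 9090 and a plain-HTTP listener (no metrics_cert/metrics_key configured):

curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_' | head   # a few CRI-O counters, e.g. crio_operations_total
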
	I0603 13:15:22.864879 1114112 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0603 13:15:22.864882 1114112 command_runner.go:130] > [crio.tracing]
	I0603 13:15:22.864890 1114112 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0603 13:15:22.864894 1114112 command_runner.go:130] > # enable_tracing = false
	I0603 13:15:22.864901 1114112 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0603 13:15:22.864905 1114112 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0603 13:15:22.864914 1114112 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0603 13:15:22.864919 1114112 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0603 13:15:22.864925 1114112 command_runner.go:130] > # CRI-O NRI configuration.
	I0603 13:15:22.864929 1114112 command_runner.go:130] > [crio.nri]
	I0603 13:15:22.864935 1114112 command_runner.go:130] > # Globally enable or disable NRI.
	I0603 13:15:22.864943 1114112 command_runner.go:130] > # enable_nri = false
	I0603 13:15:22.864950 1114112 command_runner.go:130] > # NRI socket to listen on.
	I0603 13:15:22.864954 1114112 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0603 13:15:22.864958 1114112 command_runner.go:130] > # NRI plugin directory to use.
	I0603 13:15:22.864964 1114112 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0603 13:15:22.864969 1114112 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0603 13:15:22.864976 1114112 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0603 13:15:22.864981 1114112 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0603 13:15:22.864987 1114112 command_runner.go:130] > # nri_disable_connections = false
	I0603 13:15:22.864992 1114112 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0603 13:15:22.864999 1114112 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0603 13:15:22.865003 1114112 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0603 13:15:22.865009 1114112 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0603 13:15:22.865015 1114112 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0603 13:15:22.865019 1114112 command_runner.go:130] > [crio.stats]
	I0603 13:15:22.865025 1114112 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0603 13:15:22.865033 1114112 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0603 13:15:22.865036 1114112 command_runner.go:130] > # stats_collection_period = 0
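The dump above is the configuration CRI-O reports at start-up; the same view can be reproduced on the node itself. A small sketch, assuming the crio binary is on PATH and the service runs under a systemd unit that supports reload:

sudo crio config | head -n 40    # print the configuration CRI-O would use (defaults merged with drop-ins)
sudo systemctl reload crio       # picks up options marked "supports live configuration reload" above
sudo systemctl restart crio      # required for everything else
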
	I0603 13:15:22.865195 1114112 cni.go:84] Creating CNI manager for ""
	I0603 13:15:22.865210 1114112 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 13:15:22.865220 1114112 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:15:22.865241 1114112 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-101468 NodeName:multinode-101468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:15:22.865394 1114112 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-101468"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
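Once the kubeadm binaries are confirmed below, the same generated configuration (written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down) can be sanity-checked by hand. A minimal sketch, assuming the `config validate` subcommand available in the kubeadm v1.30.x binary used in this run:

sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
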
	
	I0603 13:15:22.865465 1114112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:15:22.875999 1114112 command_runner.go:130] > kubeadm
	I0603 13:15:22.876022 1114112 command_runner.go:130] > kubectl
	I0603 13:15:22.876027 1114112 command_runner.go:130] > kubelet
	I0603 13:15:22.876099 1114112 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:15:22.876176 1114112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:15:22.885466 1114112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0603 13:15:22.902666 1114112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:15:22.919466 1114112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0603 13:15:22.937367 1114112 ssh_runner.go:195] Run: grep 192.168.39.141	control-plane.minikube.internal$ /etc/hosts
	I0603 13:15:22.941397 1114112 command_runner.go:130] > 192.168.39.141	control-plane.minikube.internal
	I0603 13:15:22.941528 1114112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:15:23.100868 1114112 ssh_runner.go:195] Run: sudo systemctl start kubelet
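After the daemon-reload and kubelet start above, the unit state can be confirmed by hand with standard systemd tooling:

sudo systemctl is-active kubelet             # prints "active" when the unit is running
sudo journalctl -u kubelet -n 50 --no-pager  # most recent kubelet log lines
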
	I0603 13:15:23.116099 1114112 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468 for IP: 192.168.39.141
	I0603 13:15:23.116159 1114112 certs.go:194] generating shared ca certs ...
	I0603 13:15:23.116185 1114112 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:15:23.116372 1114112 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:15:23.116412 1114112 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:15:23.116423 1114112 certs.go:256] generating profile certs ...
	I0603 13:15:23.116513 1114112 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/client.key
	I0603 13:15:23.116565 1114112 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.key.6effd4cc
	I0603 13:15:23.116598 1114112 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.key
	I0603 13:15:23.116609 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 13:15:23.116620 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 13:15:23.116637 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 13:15:23.116649 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 13:15:23.116660 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 13:15:23.116673 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 13:15:23.116684 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 13:15:23.116700 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 13:15:23.116767 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:15:23.116818 1114112 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:15:23.116832 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:15:23.116861 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:15:23.116890 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:15:23.116908 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:15:23.116955 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:15:23.116980 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.116993 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.117008 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.117927 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:15:23.143292 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:15:23.168452 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:15:23.192252 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:15:23.215918 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:15:23.240345 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:15:23.264637 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:15:23.288382 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:15:23.312328 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:15:23.336713 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:15:23.361876 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:15:23.386167 1114112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:15:23.402851 1114112 ssh_runner.go:195] Run: openssl version
	I0603 13:15:23.408734 1114112 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 13:15:23.408816 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:15:23.420054 1114112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.424809 1114112 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.424861 1114112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.424899 1114112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.430501 1114112 command_runner.go:130] > 3ec20f2e
	I0603 13:15:23.430566 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:15:23.440411 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:15:23.452168 1114112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.456967 1114112 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.457107 1114112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.457167 1114112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.463299 1114112 command_runner.go:130] > b5213941
	I0603 13:15:23.464596 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:15:23.476647 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:15:23.488187 1114112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.492719 1114112 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.492754 1114112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.492802 1114112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.498540 1114112 command_runner.go:130] > 51391683
	I0603 13:15:23.498617 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
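The three blocks above repeat the same pattern for installing a CA into the system trust store: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 in /etc/ssl/certs to it. A condensed sketch of that pattern (CERT is a placeholder for any of the PEMs copied above):

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 for minikubeCA.pem in this run
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # .0 is the first slot for this subject hash
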
	I0603 13:15:23.510152 1114112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:15:23.515090 1114112 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:15:23.515128 1114112 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 13:15:23.515138 1114112 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0603 13:15:23.515148 1114112 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 13:15:23.515177 1114112 command_runner.go:130] > Access: 2024-06-03 13:09:22.392722678 +0000
	I0603 13:15:23.515190 1114112 command_runner.go:130] > Modify: 2024-06-03 13:09:22.392722678 +0000
	I0603 13:15:23.515197 1114112 command_runner.go:130] > Change: 2024-06-03 13:09:22.392722678 +0000
	I0603 13:15:23.515209 1114112 command_runner.go:130] >  Birth: 2024-06-03 13:09:22.392722678 +0000
	I0603 13:15:23.515281 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:15:23.521592 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.521865 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:15:23.527790 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.528027 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:15:23.534195 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.534392 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:15:23.540324 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.540383 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:15:23.546213 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.546429 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 13:15:23.552649 1114112 command_runner.go:130] > Certificate will not expire
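Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits non-zero if it would expire within that window. The same set of certificates can be checked in one loop:

for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
    && echo "${c}: valid for at least 24h" || echo "${c}: expires within 24h"
done
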
	I0603 13:15:23.552727 1114112 kubeadm.go:391] StartCluster: {Name:multinode-101468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:15:23.552840 1114112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:15:23.552896 1114112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:15:23.598377 1114112 command_runner.go:130] > 4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166
	I0603 13:15:23.598404 1114112 command_runner.go:130] > c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3
	I0603 13:15:23.598411 1114112 command_runner.go:130] > b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55
	I0603 13:15:23.598419 1114112 command_runner.go:130] > 4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7
	I0603 13:15:23.598424 1114112 command_runner.go:130] > 21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727
	I0603 13:15:23.598429 1114112 command_runner.go:130] > d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3
	I0603 13:15:23.598440 1114112 command_runner.go:130] > 796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f
	I0603 13:15:23.598450 1114112 command_runner.go:130] > e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096
	I0603 13:15:23.598483 1114112 cri.go:89] found id: "4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166"
	I0603 13:15:23.598492 1114112 cri.go:89] found id: "c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3"
	I0603 13:15:23.598495 1114112 cri.go:89] found id: "b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55"
	I0603 13:15:23.598498 1114112 cri.go:89] found id: "4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7"
	I0603 13:15:23.598501 1114112 cri.go:89] found id: "21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727"
	I0603 13:15:23.598506 1114112 cri.go:89] found id: "d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3"
	I0603 13:15:23.598509 1114112 cri.go:89] found id: "796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f"
	I0603 13:15:23.598512 1114112 cri.go:89] found id: "e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096"
	I0603 13:15:23.598514 1114112 cri.go:89] found id: ""
	I0603 13:15:23.598559 1114112 ssh_runner.go:195] Run: sudo runc list -f json
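	The eight IDs listed above are the kube-system containers CRI-O reports at this point in the test run; minikube collects them with the crictl and runc commands recorded in the log. A small sketch for reproducing the same listing by hand on the node (root shell access, e.g. via `minikube ssh`, is assumed; the commands are the ones shown above):
	
	    # IDs only, exactly as minikube requests them above:
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # The same containers with names, images and states for readability:
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	    # Low-level view from the OCI runtime, as in the last Run line above:
	    sudo runc list -f json
	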
	
	
	==> CRI-O <==
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.282395626Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:65d39c69b35e974aabef43493e6ff16b27ce52472d1b264650f930122df5ff76,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-7jrcp,Uid:7a0d546e-6072-497f-8464-3a2dd172f9a3,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717420563413651894,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:15:29.259174603Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:420c4b77ca003abeb4ca2cc9c79777203d60c304f197ce46a6d8691c7d7e0029,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rszqr,Uid:ceb550ef-f06f-425c-b564-f4ad51d298bc,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1717420529667895121,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:15:29.259179178Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:267760408b1b8ef656a779d0b188e5b04ae9c32b3b6a9d326fac2e2d5e567dbc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9bf865e3-3171-4447-a928-3f7bcde9b7c4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717420529607675505,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T13:15:29.259173296Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da05292c8449d5c902f541bec9bc609284234cbe5e2d19f1eaef1ce8ad6b9a1a,Metadata:&PodSandboxMetadata{Name:kube-proxy-nf6c2,Uid:10b1fbac-04e0-46c6-a2cd-8befd0343e0e,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1717420529606693416,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:15:29.259178218Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:243827bd1bbb9cce2948cb638088914ba16c5a0b89e7be7b5b6b4fdd27151e60,Metadata:&PodSandboxMetadata{Name:kindnet-m96bv,Uid:3e7c090a-031c-483b-b89d-6192f0b73a9d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717420529603357858,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:15:29.259175679Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9cf1b716a8eca5ec118f3498e95cb8548791359877cef31c354c19a230d6f67f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-101468,Uid:d7f804e707df88666558ffa84b5d48ff,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717420525784271165,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d7f804e707df88666558ffa84b5d48ff,kubernetes.io/config.seen: 2024-06-03T13:15:25.266399927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2840e96fa2e7090f356f5e7c41aa3c8731c9901af22ff231660c9962c4e8bf3e,Metadata:&PodSandboxMetadat
a{Name:kube-apiserver-multinode-101468,Uid:d12e547dd6860d1022394e38f43085b7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717420525780289626,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.141:8443,kubernetes.io/config.hash: d12e547dd6860d1022394e38f43085b7,kubernetes.io/config.seen: 2024-06-03T13:15:25.266398602Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:01c9270f89191bd2de7d3adf3e0a0a2d705d439b925ca9a2562e02bd46baf958,Metadata:&PodSandboxMetadata{Name:etcd-multinode-101468,Uid:8642d3d47b20a69d006a8efccbbe2d84,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717420525776734358,Labels:map[string]string{component: etcd,io.kubernetes.conta
iner.name: POD,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.141:2379,kubernetes.io/config.hash: 8642d3d47b20a69d006a8efccbbe2d84,kubernetes.io/config.seen: 2024-06-03T13:15:25.266394330Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1737d2a651d4666ebccc320b9251e96666b9e18973f1b0578464b033366ec903,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-101468,Uid:8ac3cdbe5a6f72ed950e19c2ab2acb01,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717420525775818149,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: 8ac3cdbe5a6f72ed950e19c2ab2acb01,kubernetes.io/config.seen: 2024-06-03T13:15:25.266400756Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c293a70ba7ceac8adf622b552619290e65c8d4abff507481189dfb62d5ebf39c,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-7jrcp,Uid:7a0d546e-6072-497f-8464-3a2dd172f9a3,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717420228939239644,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:10:28.626138862Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4b34884aaaa17d07560def26c9f359984ef0f7155d017a2c1b19a6b3c07f6e6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9bf865e3-3171-4447-a928-3f7bcde9b7c4,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1717420189399654579,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T13:09:49.088767342Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d9065a030ffe63bfe7748d1d106db329870299e20d9a8773f45bef910578630,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rszqr,Uid:ceb550ef-f06f-425c-b564-f4ad51d298bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717420189389844220,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:09:49.082505928Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3ae5b1239c127ea1c3d266bdbc5ee258abe17a07155f4fbeb527a0ffd88d2e9,Metadata:&PodSandboxMetadata{Name:kindnet-m96bv,Uid:3e7c090a-031c-483b-b89d-6192f0b73a9d,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717420185391566889,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:09:45.067120973Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e055bef89aef572e5003403c6061f1c6923792ca977d474bb6508c5370028764,Metadata:&PodSandboxMetadata{Name:kube-proxy-nf6c2,Uid:10b1fbac-04e0-46c6-a2cd-8befd0343e0e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717420185381355089,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:09:45.046362419Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af0205261b8d4f59896594a7f99964fcde0a17d73a703f933043c87a0b24de8c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-101468,Uid:d7f804e707df88666558ffa84b5d48ff,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717420166665901806,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d7f804e707df88666558ffa84b5d48ff,kubernetes.io/config.seen: 2024-06-03T13:09:25.582893718Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:af3bfbe386473bcfc6b51ddb30ff65b002f1ab37171c0c16b7b5c30f4f5b1899,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-multinode-101468,Uid:8ac3cdbe5a6f72ed950e19c2ab2acb01,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717420166652126567,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8ac3cdbe5a6f72ed950e19c2ab2acb01,kubernetes.io/config.seen: 2024-06-03T13:09:25.582894572Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f9da6a04531a7f21581fc7febdacaa4e365226a02593c378dc3afdd315a8b302,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-101468,Uid:d12e547dd6860d1022394e38f43085b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717420166651988766,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.141:8443,kubernetes.io/config.hash: d12e547dd6860d1022394e38f43085b7,kubernetes.io/config.seen: 2024-06-03T13:09:25.582892237Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f10085f6aa1f8124745fba1d55bd4818fdd9515f0fbbac5b35d9856397c00530,Metadata:&PodSandboxMetadata{Name:etcd-multinode-101468,Uid:8642d3d47b20a69d006a8efccbbe2d84,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717420166650357136,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.141:2379,kubernetes.io/config.hash: 8642d3d47b20a69d006a8efccbbe2d84,kubernetes.io/config.seen: 2024-06-03T13:09:25.582888088Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d1a56fdf-65cb-4da8-bfa7-6b61aad2d689 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.283208542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b44d8d6-c0fb-40b0-b58f-5f69802d6355 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.283262441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b44d8d6-c0fb-40b0-b58f-5f69802d6355 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.283743317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:638078678bbb1e7359164bfde6c512c483000ef1fb524416d4f2c0817749b67d,PodSandboxId:65d39c69b35e974aabef43493e6ff16b27ce52472d1b264650f930122df5ff76,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717420563566846044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c,PodSandboxId:420c4b77ca003abeb4ca2cc9c79777203d60c304f197ce46a6d8691c7d7e0029,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717420529993150557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219,PodSandboxId:243827bd1bbb9cce2948cb638088914ba16c5a0b89e7be7b5b6b4fdd27151e60,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717420529963896439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f
0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f420f7e5b26fef5489931b9a2b3ce0b4bc0f6cd832ae70950acf8e34fc8f97c,PodSandboxId:267760408b1b8ef656a779d0b188e5b04ae9c32b3b6a9d326fac2e2d5e567dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717420529881369545,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},An
notations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d,PodSandboxId:da05292c8449d5c902f541bec9bc609284234cbe5e2d19f1eaef1ce8ad6b9a1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717420529804680902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca,PodSandboxId:01c9270f89191bd2de7d3adf3e0a0a2d705d439b925ca9a2562e02bd46baf958,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717420526047790762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3,PodSandboxId:2840e96fa2e7090f356f5e7c41aa3c8731c9901af22ff231660c9962c4e8bf3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717420526000046093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb,PodSandboxId:1737d2a651d4666ebccc320b9251e96666b9e18973f1b0578464b033366ec903,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717420526017502749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31,PodSandboxId:9cf1b716a8eca5ec118f3498e95cb8548791359877cef31c354c19a230d6f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717420525955565407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e5ab5496d7e7db8f5b2d7ea36ba64a84ede2f508b16a2fc00edb87740393109,PodSandboxId:c293a70ba7ceac8adf622b552619290e65c8d4abff507481189dfb62d5ebf39c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717420229782857320,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166,PodSandboxId:4d9065a030ffe63bfe7748d1d106db329870299e20d9a8773f45bef910578630,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717420189613417242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3,PodSandboxId:a4b34884aaaa17d07560def26c9f359984ef0f7155d017a2c1b19a6b3c07f6e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717420189575251486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55,PodSandboxId:c3ae5b1239c127ea1c3d266bdbc5ee258abe17a07155f4fbeb527a0ffd88d2e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717420188328971472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7,PodSandboxId:e055bef89aef572e5003403c6061f1c6923792ca977d474bb6508c5370028764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717420185928954873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3,PodSandboxId:f10085f6aa1f8124745fba1d55bd4818fdd9515f0fbbac5b35d9856397c00530,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717420166857354713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d
84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727,PodSandboxId:af0205261b8d4f59896594a7f99964fcde0a17d73a703f933043c87a0b24de8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717420166908479492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df886665
58ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f,PodSandboxId:af3bfbe386473bcfc6b51ddb30ff65b002f1ab37171c0c16b7b5c30f4f5b1899,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717420166856585006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096,PodSandboxId:f9da6a04531a7f21581fc7febdacaa4e365226a02593c378dc3afdd315a8b302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717420166811627955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b44d8d6-c0fb-40b0-b58f-5f69802d6355 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.317888236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d61b584d-8a26-484e-9d1c-09fd57aac3d4 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.317989793Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d61b584d-8a26-484e-9d1c-09fd57aac3d4 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.319354841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66521b67-d2cc-44ca-8528-827851346359 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.319777095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717420604319755356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66521b67-d2cc-44ca-8528-827851346359 name=/runtime.v1.ImageService/ImageFsInfo
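	The Version and ImageFsInfo request/response pairs in this debug log are ordinary CRI API status calls being answered by CRI-O. Assuming crictl on the node is already pointed at the CRI-O socket (the minikube default), the same endpoints can be queried by hand to see the values that appear in the responses above:
	
	    sudo crictl version      # runtime name/version, cf. the VersionResponse entries
	    sudo crictl imagefsinfo  # image filesystem usage, cf. the ImageFsInfoResponse entry
	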
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.320422599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07eb3666-6e98-441d-97fd-bf0998b6fb82 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.320474445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07eb3666-6e98-441d-97fd-bf0998b6fb82 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.322629488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:638078678bbb1e7359164bfde6c512c483000ef1fb524416d4f2c0817749b67d,PodSandboxId:65d39c69b35e974aabef43493e6ff16b27ce52472d1b264650f930122df5ff76,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717420563566846044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c,PodSandboxId:420c4b77ca003abeb4ca2cc9c79777203d60c304f197ce46a6d8691c7d7e0029,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717420529993150557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219,PodSandboxId:243827bd1bbb9cce2948cb638088914ba16c5a0b89e7be7b5b6b4fdd27151e60,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717420529963896439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f
0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f420f7e5b26fef5489931b9a2b3ce0b4bc0f6cd832ae70950acf8e34fc8f97c,PodSandboxId:267760408b1b8ef656a779d0b188e5b04ae9c32b3b6a9d326fac2e2d5e567dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717420529881369545,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},An
notations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d,PodSandboxId:da05292c8449d5c902f541bec9bc609284234cbe5e2d19f1eaef1ce8ad6b9a1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717420529804680902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca,PodSandboxId:01c9270f89191bd2de7d3adf3e0a0a2d705d439b925ca9a2562e02bd46baf958,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717420526047790762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3,PodSandboxId:2840e96fa2e7090f356f5e7c41aa3c8731c9901af22ff231660c9962c4e8bf3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717420526000046093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb,PodSandboxId:1737d2a651d4666ebccc320b9251e96666b9e18973f1b0578464b033366ec903,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717420526017502749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31,PodSandboxId:9cf1b716a8eca5ec118f3498e95cb8548791359877cef31c354c19a230d6f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717420525955565407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e5ab5496d7e7db8f5b2d7ea36ba64a84ede2f508b16a2fc00edb87740393109,PodSandboxId:c293a70ba7ceac8adf622b552619290e65c8d4abff507481189dfb62d5ebf39c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717420229782857320,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166,PodSandboxId:4d9065a030ffe63bfe7748d1d106db329870299e20d9a8773f45bef910578630,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717420189613417242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3,PodSandboxId:a4b34884aaaa17d07560def26c9f359984ef0f7155d017a2c1b19a6b3c07f6e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717420189575251486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55,PodSandboxId:c3ae5b1239c127ea1c3d266bdbc5ee258abe17a07155f4fbeb527a0ffd88d2e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717420188328971472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7,PodSandboxId:e055bef89aef572e5003403c6061f1c6923792ca977d474bb6508c5370028764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717420185928954873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3,PodSandboxId:f10085f6aa1f8124745fba1d55bd4818fdd9515f0fbbac5b35d9856397c00530,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717420166857354713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d
84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727,PodSandboxId:af0205261b8d4f59896594a7f99964fcde0a17d73a703f933043c87a0b24de8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717420166908479492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df886665
58ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f,PodSandboxId:af3bfbe386473bcfc6b51ddb30ff65b002f1ab37171c0c16b7b5c30f4f5b1899,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717420166856585006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096,PodSandboxId:f9da6a04531a7f21581fc7febdacaa4e365226a02593c378dc3afdd315a8b302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717420166811627955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07eb3666-6e98-441d-97fd-bf0998b6fb82 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.371192785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe17824c-dd37-4b6c-86ee-bba71e5665ae name=/runtime.v1.RuntimeService/Version
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.371285416Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe17824c-dd37-4b6c-86ee-bba71e5665ae name=/runtime.v1.RuntimeService/Version
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.372513487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71f7d4c7-79a8-4934-926e-2222801cfd50 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.373214779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717420604373034373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71f7d4c7-79a8-4934-926e-2222801cfd50 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.373753589Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47ad3472-d08d-4c40-a1c3-97cd0af43dd9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.373828864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47ad3472-d08d-4c40-a1c3-97cd0af43dd9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.374288560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:638078678bbb1e7359164bfde6c512c483000ef1fb524416d4f2c0817749b67d,PodSandboxId:65d39c69b35e974aabef43493e6ff16b27ce52472d1b264650f930122df5ff76,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717420563566846044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c,PodSandboxId:420c4b77ca003abeb4ca2cc9c79777203d60c304f197ce46a6d8691c7d7e0029,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717420529993150557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219,PodSandboxId:243827bd1bbb9cce2948cb638088914ba16c5a0b89e7be7b5b6b4fdd27151e60,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717420529963896439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f
0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f420f7e5b26fef5489931b9a2b3ce0b4bc0f6cd832ae70950acf8e34fc8f97c,PodSandboxId:267760408b1b8ef656a779d0b188e5b04ae9c32b3b6a9d326fac2e2d5e567dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717420529881369545,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},An
notations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d,PodSandboxId:da05292c8449d5c902f541bec9bc609284234cbe5e2d19f1eaef1ce8ad6b9a1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717420529804680902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca,PodSandboxId:01c9270f89191bd2de7d3adf3e0a0a2d705d439b925ca9a2562e02bd46baf958,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717420526047790762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3,PodSandboxId:2840e96fa2e7090f356f5e7c41aa3c8731c9901af22ff231660c9962c4e8bf3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717420526000046093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb,PodSandboxId:1737d2a651d4666ebccc320b9251e96666b9e18973f1b0578464b033366ec903,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717420526017502749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31,PodSandboxId:9cf1b716a8eca5ec118f3498e95cb8548791359877cef31c354c19a230d6f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717420525955565407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e5ab5496d7e7db8f5b2d7ea36ba64a84ede2f508b16a2fc00edb87740393109,PodSandboxId:c293a70ba7ceac8adf622b552619290e65c8d4abff507481189dfb62d5ebf39c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717420229782857320,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166,PodSandboxId:4d9065a030ffe63bfe7748d1d106db329870299e20d9a8773f45bef910578630,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717420189613417242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3,PodSandboxId:a4b34884aaaa17d07560def26c9f359984ef0f7155d017a2c1b19a6b3c07f6e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717420189575251486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55,PodSandboxId:c3ae5b1239c127ea1c3d266bdbc5ee258abe17a07155f4fbeb527a0ffd88d2e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717420188328971472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7,PodSandboxId:e055bef89aef572e5003403c6061f1c6923792ca977d474bb6508c5370028764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717420185928954873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3,PodSandboxId:f10085f6aa1f8124745fba1d55bd4818fdd9515f0fbbac5b35d9856397c00530,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717420166857354713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d
84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727,PodSandboxId:af0205261b8d4f59896594a7f99964fcde0a17d73a703f933043c87a0b24de8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717420166908479492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df886665
58ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f,PodSandboxId:af3bfbe386473bcfc6b51ddb30ff65b002f1ab37171c0c16b7b5c30f4f5b1899,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717420166856585006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096,PodSandboxId:f9da6a04531a7f21581fc7febdacaa4e365226a02593c378dc3afdd315a8b302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717420166811627955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47ad3472-d08d-4c40-a1c3-97cd0af43dd9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.417347185Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19f4f5f2-d343-4816-8681-b9e6fdb21e8d name=/runtime.v1.RuntimeService/Version
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.417435369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19f4f5f2-d343-4816-8681-b9e6fdb21e8d name=/runtime.v1.RuntimeService/Version
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.418456926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a128fa26-26fd-4f8f-88a1-06107ee08fa3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.419011803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717420604418990662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a128fa26-26fd-4f8f-88a1-06107ee08fa3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.419673951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e153282-dd81-4490-8560-481c1679e6a8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.419754955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e153282-dd81-4490-8560-481c1679e6a8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:16:44 multinode-101468 crio[2896]: time="2024-06-03 13:16:44.420151870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:638078678bbb1e7359164bfde6c512c483000ef1fb524416d4f2c0817749b67d,PodSandboxId:65d39c69b35e974aabef43493e6ff16b27ce52472d1b264650f930122df5ff76,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717420563566846044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c,PodSandboxId:420c4b77ca003abeb4ca2cc9c79777203d60c304f197ce46a6d8691c7d7e0029,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717420529993150557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219,PodSandboxId:243827bd1bbb9cce2948cb638088914ba16c5a0b89e7be7b5b6b4fdd27151e60,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717420529963896439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f
0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f420f7e5b26fef5489931b9a2b3ce0b4bc0f6cd832ae70950acf8e34fc8f97c,PodSandboxId:267760408b1b8ef656a779d0b188e5b04ae9c32b3b6a9d326fac2e2d5e567dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717420529881369545,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},An
notations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d,PodSandboxId:da05292c8449d5c902f541bec9bc609284234cbe5e2d19f1eaef1ce8ad6b9a1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717420529804680902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca,PodSandboxId:01c9270f89191bd2de7d3adf3e0a0a2d705d439b925ca9a2562e02bd46baf958,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717420526047790762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3,PodSandboxId:2840e96fa2e7090f356f5e7c41aa3c8731c9901af22ff231660c9962c4e8bf3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717420526000046093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb,PodSandboxId:1737d2a651d4666ebccc320b9251e96666b9e18973f1b0578464b033366ec903,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717420526017502749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31,PodSandboxId:9cf1b716a8eca5ec118f3498e95cb8548791359877cef31c354c19a230d6f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717420525955565407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e5ab5496d7e7db8f5b2d7ea36ba64a84ede2f508b16a2fc00edb87740393109,PodSandboxId:c293a70ba7ceac8adf622b552619290e65c8d4abff507481189dfb62d5ebf39c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717420229782857320,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166,PodSandboxId:4d9065a030ffe63bfe7748d1d106db329870299e20d9a8773f45bef910578630,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717420189613417242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3,PodSandboxId:a4b34884aaaa17d07560def26c9f359984ef0f7155d017a2c1b19a6b3c07f6e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717420189575251486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55,PodSandboxId:c3ae5b1239c127ea1c3d266bdbc5ee258abe17a07155f4fbeb527a0ffd88d2e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717420188328971472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7,PodSandboxId:e055bef89aef572e5003403c6061f1c6923792ca977d474bb6508c5370028764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717420185928954873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3,PodSandboxId:f10085f6aa1f8124745fba1d55bd4818fdd9515f0fbbac5b35d9856397c00530,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717420166857354713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d
84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727,PodSandboxId:af0205261b8d4f59896594a7f99964fcde0a17d73a703f933043c87a0b24de8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717420166908479492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df886665
58ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f,PodSandboxId:af3bfbe386473bcfc6b51ddb30ff65b002f1ab37171c0c16b7b5c30f4f5b1899,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717420166856585006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096,PodSandboxId:f9da6a04531a7f21581fc7febdacaa4e365226a02593c378dc3afdd315a8b302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717420166811627955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e153282-dd81-4490-8560-481c1679e6a8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	638078678bbb1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      40 seconds ago       Running             busybox                   1                   65d39c69b35e9       busybox-fc5497c4f-7jrcp
	981940dc117e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   420c4b77ca003       coredns-7db6d8ff4d-rszqr
	7a98e88b0a3e8       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   243827bd1bbb9       kindnet-m96bv
	5f420f7e5b26f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   267760408b1b8       storage-provisioner
	cda936f669af5       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   da05292c8449d       kube-proxy-nf6c2
	fe63357eb594d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   01c9270f89191       etcd-multinode-101468
	115ca08701ae5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   1737d2a651d46       kube-scheduler-multinode-101468
	2b7fd9adda334       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   2840e96fa2e70       kube-apiserver-multinode-101468
	64361fea21d48       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   9cf1b716a8eca       kube-controller-manager-multinode-101468
	0e5ab5496d7e7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   c293a70ba7cea       busybox-fc5497c4f-7jrcp
	4aaed31d9690e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago        Exited              coredns                   0                   4d9065a030ffe       coredns-7db6d8ff4d-rszqr
	c41d1ef9ae6ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago        Exited              storage-provisioner       0                   a4b34884aaaa1       storage-provisioner
	b5fb5fac18c27       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    6 minutes ago        Exited              kindnet-cni               0                   c3ae5b1239c12       kindnet-m96bv
	4c205814428f5       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago        Exited              kube-proxy                0                   e055bef89aef5       kube-proxy-nf6c2
	21a5bccaa9cf3       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   af0205261b8d4       kube-controller-manager-multinode-101468
	d685e8439e323       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   f10085f6aa1f8       etcd-multinode-101468
	796bbd6b016f5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   af3bfbe386473       kube-scheduler-multinode-101468
	e9be4b439e872       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   f9da6a04531a7       kube-apiserver-multinode-101468
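	
	The crio debug entries above show the runtime being polled over its unix socket: a VersionRequest, an ImageFsInfoRequest, and a ListContainersRequest with an empty filter ("No filters were applied, returning full container list"). The following minimal Go sketch issues the same ListContainers RPC against the CRI API; it is illustrative only and not part of the test suite, and the socket path (taken from the node's kubeadm.alpha.kubernetes.io/cri-socket annotation) and module imports are assumptions.
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// CRI-O listens on a root-owned unix socket; path assumed from the
		// cri-socket annotation shown in the node description below.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()
	
		client := runtimev1.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Empty filter: the runtime returns the full container list,
		// matching the "No filters were applied" debug lines above.
		resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s %-25s attempt=%d %-17s %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State,
				time.Unix(0, c.CreatedAt).Format(time.RFC3339))
		}
	}
	
	With the empty filter, the response carries the same fields summarized in the container status table above (Id, Name, Attempt, State, CreatedAt).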
	
	
	==> coredns [4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166] <==
	[INFO] 10.244.0.3:43959 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001862491s
	[INFO] 10.244.0.3:57291 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086461s
	[INFO] 10.244.0.3:40828 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081709s
	[INFO] 10.244.0.3:50719 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001335634s
	[INFO] 10.244.0.3:45764 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050319s
	[INFO] 10.244.0.3:41720 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000030767s
	[INFO] 10.244.0.3:35115 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042304s
	[INFO] 10.244.1.2:57848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012116s
	[INFO] 10.244.1.2:36461 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101992s
	[INFO] 10.244.1.2:55584 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008513s
	[INFO] 10.244.1.2:39933 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076652s
	[INFO] 10.244.0.3:52564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114075s
	[INFO] 10.244.0.3:37895 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068288s
	[INFO] 10.244.0.3:50037 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049075s
	[INFO] 10.244.0.3:60385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111106s
	[INFO] 10.244.1.2:38500 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128494s
	[INFO] 10.244.1.2:59854 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155604s
	[INFO] 10.244.1.2:48098 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090347s
	[INFO] 10.244.1.2:47118 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113579s
	[INFO] 10.244.0.3:41061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096684s
	[INFO] 10.244.0.3:48514 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101317s
	[INFO] 10.244.0.3:33790 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052214s
	[INFO] 10.244.0.3:52582 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071463s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33921 - 634 "HINFO IN 8246024549565837961.4546759840687616933. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.05002989s
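	
	The coredns query logs above record A/AAAA lookups for kubernetes.default(.svc.cluster.local) and host.minikube.internal plus PTR lookups for the service IPs, issued from the test pods. As an illustrative sketch only, equivalent lookups could be reproduced from inside the cluster with Go's resolver pointed at the cluster DNS service; the 10.96.0.10:53 address is an assumption inferred from the 10.0.96.10.in-addr.arpa PTR queries, not something stated elsewhere in this report.
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Send all lookups to the assumed cluster DNS service instead of
		// whatever /etc/resolv.conf points at.
		resolver := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// These names match the A/AAAA queries recorded by coredns above.
		for _, name := range []string{
			"kubernetes.default.svc.cluster.local",
			"host.minikube.internal",
		} {
			addrs, err := resolver.LookupHost(ctx, name)
			fmt.Printf("%-40s %v err=%v\n", name, addrs, err)
		}
	}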
	
	
	==> describe nodes <==
	Name:               multinode-101468
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101468
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-101468
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_09_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:09:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101468
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:16:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:15:29 +0000   Mon, 03 Jun 2024 13:09:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:15:29 +0000   Mon, 03 Jun 2024 13:09:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:15:29 +0000   Mon, 03 Jun 2024 13:09:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:15:29 +0000   Mon, 03 Jun 2024 13:09:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    multinode-101468
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbce10a053614ea7b4edc56b16e8c1e3
	  System UUID:                fbce10a0-5361-4ea7-b4ed-c56b16e8c1e3
	  Boot ID:                    5dc59376-86a3-4f14-bf29-4db523acb769
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7jrcp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 coredns-7db6d8ff4d-rszqr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m59s
	  kube-system                 etcd-multinode-101468                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m12s
	  kube-system                 kindnet-m96bv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m59s
	  kube-system                 kube-apiserver-multinode-101468             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-controller-manager-multinode-101468    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-proxy-nf6c2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kube-scheduler-multinode-101468             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m58s                  kube-proxy       
	  Normal  Starting                 74s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m18s (x8 over 7m19s)  kubelet          Node multinode-101468 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m18s (x8 over 7m19s)  kubelet          Node multinode-101468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m18s (x7 over 7m19s)  kubelet          Node multinode-101468 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m13s                  kubelet          Node multinode-101468 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  7m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m13s                  kubelet          Node multinode-101468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s                  kubelet          Node multinode-101468 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m13s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m                     node-controller  Node multinode-101468 event: Registered Node multinode-101468 in Controller
	  Normal  NodeReady                6m55s                  kubelet          Node multinode-101468 status is now: NodeReady
	  Normal  Starting                 79s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)      kubelet          Node multinode-101468 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)      kubelet          Node multinode-101468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)      kubelet          Node multinode-101468 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                    node-controller  Node multinode-101468 event: Registered Node multinode-101468 in Controller
	
	
	Name:               multinode-101468-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101468-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-101468
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T13_16_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:16:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101468-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:16:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:16:38 +0000   Mon, 03 Jun 2024 13:16:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:16:38 +0000   Mon, 03 Jun 2024 13:16:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:16:38 +0000   Mon, 03 Jun 2024 13:16:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:16:38 +0000   Mon, 03 Jun 2024 13:16:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    multinode-101468-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 091f000580784a52ac949dd19c81c1ff
	  System UUID:                091f0005-8078-4a52-ac94-9dd19c81c1ff
	  Boot ID:                    3671d987-c1a9-4829-8b8b-3bef68dcee08
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hjfgd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kindnet-2lwvt              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-proxy-zq896           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 32s                    kube-proxy       
	  Normal  Starting                 6m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m26s (x2 over 6m26s)  kubelet          Node multinode-101468-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x2 over 6m26s)  kubelet          Node multinode-101468-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x2 over 6m26s)  kubelet          Node multinode-101468-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m18s                  kubelet          Node multinode-101468-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  37s (x2 over 37s)      kubelet          Node multinode-101468-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x2 over 37s)      kubelet          Node multinode-101468-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x2 over 37s)      kubelet          Node multinode-101468-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           32s                    node-controller  Node multinode-101468-m02 event: Registered Node multinode-101468-m02 in Controller
	  Normal  NodeReady                29s                    kubelet          Node multinode-101468-m02 status is now: NodeReady
	
	
	Name:               multinode-101468-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101468-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-101468
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T13_16_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:16:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101468-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:16:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:16:41 +0000   Mon, 03 Jun 2024 13:16:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:16:41 +0000   Mon, 03 Jun 2024 13:16:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:16:41 +0000   Mon, 03 Jun 2024 13:16:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:16:41 +0000   Mon, 03 Jun 2024 13:16:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    multinode-101468-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 137814128d6b48449253065be6c5ec6e
	  System UUID:                13781412-8d6b-4844-9253-065be6c5ec6e
	  Boot ID:                    d462854a-c0e4-4006-9790-d011c7616f0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-vhd2b       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m47s
	  kube-system                 kube-proxy-jd5x2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m42s                  kube-proxy  
	  Normal  Starting                 6s                     kube-proxy  
	  Normal  Starting                 5m4s                   kube-proxy  
	  Normal  NodeHasNoDiskPressure    5m47s (x2 over 5m47s)  kubelet     Node multinode-101468-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s (x2 over 5m47s)  kubelet     Node multinode-101468-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m47s (x2 over 5m47s)  kubelet     Node multinode-101468-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m38s                  kubelet     Node multinode-101468-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m8s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m7s (x2 over 5m8s)    kubelet     Node multinode-101468-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m7s (x2 over 5m8s)    kubelet     Node multinode-101468-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m7s (x2 over 5m8s)    kubelet     Node multinode-101468-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m1s                   kubelet     Node multinode-101468-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  10s (x2 over 10s)      kubelet     Node multinode-101468-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x2 over 10s)      kubelet     Node multinode-101468-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x2 over 10s)      kubelet     Node multinode-101468-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-101468-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.483250] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.055742] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060969] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.162613] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.134263] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.280125] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.259872] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.506475] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062202] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.471282] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.084758] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.482107] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.675168] systemd-fstab-generator[1474]: Ignoring "noauto" option for root device
	[  +5.250697] kauditd_printk_skb: 80 callbacks suppressed
	[Jun 3 13:15] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.143642] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +0.182463] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.155772] systemd-fstab-generator[2852]: Ignoring "noauto" option for root device
	[  +0.321514] systemd-fstab-generator[2880]: Ignoring "noauto" option for root device
	[  +0.741085] systemd-fstab-generator[2978]: Ignoring "noauto" option for root device
	[  +2.051083] systemd-fstab-generator[3103]: Ignoring "noauto" option for root device
	[  +4.641857] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.709565] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.213684] systemd-fstab-generator[3923]: Ignoring "noauto" option for root device
	[Jun 3 13:16] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3] <==
	{"level":"info","ts":"2024-06-03T13:09:27.698377Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-06-03T13:10:18.503406Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.19218ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8343920919220089995 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-101468-m02.17d58096d5530ffd\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-101468-m02.17d58096d5530ffd\" value_size:642 lease:8343920919220089018 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-03T13:10:18.503807Z","caller":"traceutil/trace.go:171","msg":"trace[1232507046] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"256.775489ms","start":"2024-06-03T13:10:18.246985Z","end":"2024-06-03T13:10:18.50376Z","steps":["trace[1232507046] 'process raft request'  (duration: 131.331476ms)","trace[1232507046] 'compare'  (duration: 124.100984ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T13:10:18.504616Z","caller":"traceutil/trace.go:171","msg":"trace[1636408013] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"175.00846ms","start":"2024-06-03T13:10:18.329583Z","end":"2024-06-03T13:10:18.504592Z","steps":["trace[1636408013] 'process raft request'  (duration: 174.045088ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:10:22.85024Z","caller":"traceutil/trace.go:171","msg":"trace[550781007] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"150.752963ms","start":"2024-06-03T13:10:22.699427Z","end":"2024-06-03T13:10:22.85018Z","steps":["trace[550781007] 'process raft request'  (duration: 150.598237ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:10:22.912768Z","caller":"traceutil/trace.go:171","msg":"trace[2141742139] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"132.93847ms","start":"2024-06-03T13:10:22.779813Z","end":"2024-06-03T13:10:22.912752Z","steps":["trace[2141742139] 'process raft request'  (duration: 132.840839ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:10:57.835395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.050474ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8343920919220090360 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-101468-m03.17d5809ffe106274\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-101468-m03.17d5809ffe106274\" value_size:640 lease:8343920919220090165 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-03T13:10:57.835917Z","caller":"traceutil/trace.go:171","msg":"trace[1418118352] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"227.069098ms","start":"2024-06-03T13:10:57.608827Z","end":"2024-06-03T13:10:57.835896Z","steps":["trace[1418118352] 'process raft request'  (duration: 121.386696ms)","trace[1418118352] 'compare'  (duration: 104.707903ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T13:10:57.836134Z","caller":"traceutil/trace.go:171","msg":"trace[1797267974] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"161.315102ms","start":"2024-06-03T13:10:57.674743Z","end":"2024-06-03T13:10:57.836058Z","steps":["trace[1797267974] 'process raft request'  (duration: 161.15068ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:10:57.836254Z","caller":"traceutil/trace.go:171","msg":"trace[1173506688] linearizableReadLoop","detail":"{readStateIndex:620; appliedIndex:619; }","duration":"188.810089ms","start":"2024-06-03T13:10:57.647435Z","end":"2024-06-03T13:10:57.836245Z","steps":["trace[1173506688] 'read index received'  (duration: 82.732802ms)","trace[1173506688] 'applied index is now lower than readState.Index'  (duration: 106.076446ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T13:10:57.836468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.023952ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-101468-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-06-03T13:10:57.836527Z","caller":"traceutil/trace.go:171","msg":"trace[1269450301] range","detail":"{range_begin:/registry/minions/multinode-101468-m03; range_end:; response_count:1; response_revision:590; }","duration":"189.120434ms","start":"2024-06-03T13:10:57.647396Z","end":"2024-06-03T13:10:57.836516Z","steps":["trace[1269450301] 'agreement among raft nodes before linearized reading'  (duration: 188.948124ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:10:57.836498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.89141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T13:10:57.836617Z","caller":"traceutil/trace.go:171","msg":"trace[1983427030] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:590; }","duration":"138.038172ms","start":"2024-06-03T13:10:57.698569Z","end":"2024-06-03T13:10:57.836607Z","steps":["trace[1983427030] 'agreement among raft nodes before linearized reading'  (duration: 137.905816ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:11:40.266758Z","caller":"traceutil/trace.go:171","msg":"trace[1282143115] transaction","detail":"{read_only:false; response_revision:700; number_of_response:1; }","duration":"115.697577ms","start":"2024-06-03T13:11:40.15104Z","end":"2024-06-03T13:11:40.266737Z","steps":["trace[1282143115] 'process raft request'  (duration: 115.575772ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:13:50.155986Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-03T13:13:50.156168Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-101468","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"]}
	{"level":"warn","ts":"2024-06-03T13:13:50.156316Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T13:13:50.156424Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T13:13:50.240135Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.141:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T13:13:50.240185Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.141:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T13:13:50.24031Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2398e045949c73cb","current-leader-member-id":"2398e045949c73cb"}
	{"level":"info","ts":"2024-06-03T13:13:50.243278Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-06-03T13:13:50.243414Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-06-03T13:13:50.243425Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-101468","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"]}
	
	
	==> etcd [fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca] <==
	{"level":"info","ts":"2024-06-03T13:15:26.533517Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:15:26.53377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb switched to configuration voters=(2565046577238143947)"}
	{"level":"info","ts":"2024-06-03T13:15:26.533847Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","added-peer-id":"2398e045949c73cb","added-peer-peer-urls":["https://192.168.39.141:2380"]}
	{"level":"info","ts":"2024-06-03T13:15:26.533963Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:15:26.534006Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:15:26.584197Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T13:15:26.584457Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2398e045949c73cb","initial-advertise-peer-urls":["https://192.168.39.141:2380"],"listen-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.141:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T13:15:26.58451Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T13:15:26.584621Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-06-03T13:15:26.584647Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-06-03T13:15:27.684665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T13:15:27.684725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T13:15:27.684749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgPreVoteResp from 2398e045949c73cb at term 2"}
	{"level":"info","ts":"2024-06-03T13:15:27.684762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T13:15:27.684768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgVoteResp from 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2024-06-03T13:15:27.684776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became leader at term 3"}
	{"level":"info","ts":"2024-06-03T13:15:27.684784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2398e045949c73cb elected leader 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2024-06-03T13:15:27.690551Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2398e045949c73cb","local-member-attributes":"{Name:multinode-101468 ClientURLs:[https://192.168.39.141:2379]}","request-path":"/0/members/2398e045949c73cb/attributes","cluster-id":"bf8381628c3e4cea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T13:15:27.690732Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:15:27.690768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:15:27.691228Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T13:15:27.691301Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T13:15:27.693027Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.141:2379"}
	{"level":"info","ts":"2024-06-03T13:15:27.693198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T13:16:37.999867Z","caller":"traceutil/trace.go:171","msg":"trace[1945236411] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"132.54954ms","start":"2024-06-03T13:16:37.867255Z","end":"2024-06-03T13:16:37.999805Z","steps":["trace[1945236411] 'process raft request'  (duration: 132.333333ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:16:44 up 7 min,  0 users,  load average: 0.29, 0.25, 0.13
	Linux multinode-101468 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219] <==
	I0603 13:16:00.815528       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:16:10.824452       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:16:10.824498       1 main.go:227] handling current node
	I0603 13:16:10.824517       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:16:10.824522       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:16:10.824705       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:16:10.824738       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:16:20.832450       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:16:20.832509       1 main.go:227] handling current node
	I0603 13:16:20.832525       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:16:20.832533       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:16:20.832679       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:16:20.832713       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:16:30.838206       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:16:30.838306       1 main.go:227] handling current node
	I0603 13:16:30.838330       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:16:30.838348       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:16:30.838487       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:16:30.838508       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:16:40.842979       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:16:40.843028       1 main.go:227] handling current node
	I0603 13:16:40.843039       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:16:40.843044       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:16:40.843623       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:16:40.843655       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55] <==
	I0603 13:13:09.094408       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:13:19.107719       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:13:19.107825       1 main.go:227] handling current node
	I0603 13:13:19.107854       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:13:19.107878       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:13:19.107995       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:13:19.108017       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:13:29.121049       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:13:29.121211       1 main.go:227] handling current node
	I0603 13:13:29.121235       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:13:29.121253       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:13:29.121457       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:13:29.121526       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:13:39.132519       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:13:39.132641       1 main.go:227] handling current node
	I0603 13:13:39.132669       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:13:39.132732       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:13:39.132987       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:13:39.133031       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:13:49.142793       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:13:49.142982       1 main.go:227] handling current node
	I0603 13:13:49.143007       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:13:49.143025       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:13:49.143300       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:13:49.143386       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3] <==
	I0603 13:15:29.071919       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 13:15:29.071958       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 13:15:29.075806       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 13:15:29.076174       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 13:15:29.076297       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 13:15:29.079829       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 13:15:29.079945       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 13:15:29.082326       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 13:15:29.082359       1 policy_source.go:224] refreshing policies
	I0603 13:15:29.082761       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 13:15:29.084623       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0603 13:15:29.099222       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0603 13:15:29.111458       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 13:15:29.111658       1 aggregator.go:165] initial CRD sync complete...
	I0603 13:15:29.111741       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 13:15:29.111764       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 13:15:29.111787       1 cache.go:39] Caches are synced for autoregister controller
	I0603 13:15:29.988652       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 13:15:31.287751       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 13:15:31.414271       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 13:15:31.434275       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 13:15:31.504934       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 13:15:31.512154       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 13:15:42.220467       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 13:15:42.375140       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096] <==
	I0603 13:09:30.795762       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 13:09:30.845667       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0603 13:09:30.853049       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.141]
	I0603 13:09:30.854192       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 13:09:30.859442       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 13:09:31.210410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 13:09:31.950757       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 13:09:31.973294       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 13:09:31.990437       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 13:09:44.998329       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0603 13:09:45.248648       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0603 13:10:30.913763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46902: use of closed network connection
	E0603 13:10:31.101347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46908: use of closed network connection
	E0603 13:10:31.289955       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46912: use of closed network connection
	E0603 13:10:31.470192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46928: use of closed network connection
	E0603 13:10:31.638249       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46942: use of closed network connection
	E0603 13:10:31.811516       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46970: use of closed network connection
	E0603 13:10:32.109797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:47008: use of closed network connection
	E0603 13:10:32.285163       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:47016: use of closed network connection
	E0603 13:10:32.455863       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:47036: use of closed network connection
	E0603 13:10:32.622052       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:47064: use of closed network connection
	I0603 13:13:50.169539       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0603 13:13:50.171280       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0603 13:13:50.183333       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0603 13:13:50.186392       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727] <==
	I0603 13:09:50.072308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="99.122µs"
	I0603 13:10:18.508308       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101468-m02\" does not exist"
	I0603 13:10:18.522942       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m02" podCIDRs=["10.244.1.0/24"]
	I0603 13:10:19.458602       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101468-m02"
	I0603 13:10:26.343271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:10:28.635160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.393334ms"
	I0603 13:10:28.649994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.774352ms"
	I0603 13:10:28.650124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.514µs"
	I0603 13:10:30.178888       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.939236ms"
	I0603 13:10:30.179047       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.064µs"
	I0603 13:10:30.396424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.615312ms"
	I0603 13:10:30.396666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.202µs"
	I0603 13:10:57.839441       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101468-m03\" does not exist"
	I0603 13:10:57.839579       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:10:57.851258       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m03" podCIDRs=["10.244.2.0/24"]
	I0603 13:10:59.475364       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101468-m03"
	I0603 13:11:06.998222       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:11:35.981884       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:11:37.038348       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101468-m03\" does not exist"
	I0603 13:11:37.038639       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:11:37.063299       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m03" podCIDRs=["10.244.3.0/24"]
	I0603 13:11:43.756484       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:12:24.529125       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m03"
	I0603 13:12:24.604492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.821312ms"
	I0603 13:12:24.605144       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125.26µs"
	
	
	==> kube-controller-manager [64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31] <==
	I0603 13:15:42.923549       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 13:15:42.923599       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0603 13:16:02.935792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.53512ms"
	I0603 13:16:02.936331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.562µs"
	I0603 13:16:02.951541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.671078ms"
	I0603 13:16:02.951943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.645µs"
	I0603 13:16:07.437897       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101468-m02\" does not exist"
	I0603 13:16:07.453053       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m02" podCIDRs=["10.244.1.0/24"]
	I0603 13:16:09.341747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.183µs"
	I0603 13:16:09.382375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.511µs"
	I0603 13:16:09.395676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.495µs"
	I0603 13:16:09.402974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.147µs"
	I0603 13:16:09.410623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.693µs"
	I0603 13:16:09.414842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.478µs"
	I0603 13:16:12.138188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.866µs"
	I0603 13:16:15.446884       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:16:15.462178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.926µs"
	I0603 13:16:15.481835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.592µs"
	I0603 13:16:16.891955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.310445ms"
	I0603 13:16:16.894457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.375µs"
	I0603 13:16:33.656003       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:16:34.635851       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:16:34.635950       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101468-m03\" does not exist"
	I0603 13:16:34.648208       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m03" podCIDRs=["10.244.2.0/24"]
	I0603 13:16:41.445487       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	
	
	==> kube-proxy [4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7] <==
	I0603 13:09:46.123830       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:09:46.140223       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	I0603 13:09:46.180500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:09:46.180536       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:09:46.180551       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:09:46.183544       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:09:46.183839       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:09:46.183890       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:09:46.185218       1 config.go:192] "Starting service config controller"
	I0603 13:09:46.185317       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:09:46.185361       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:09:46.185378       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:09:46.185862       1 config.go:319] "Starting node config controller"
	I0603 13:09:46.185903       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:09:46.286003       1 shared_informer.go:320] Caches are synced for node config
	I0603 13:09:46.286052       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:09:46.286143       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d] <==
	I0603 13:15:30.121030       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:15:30.135591       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	I0603 13:15:30.240538       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:15:30.240639       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:15:30.240656       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:15:30.245192       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:15:30.245392       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:15:30.245424       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:15:30.246963       1 config.go:192] "Starting service config controller"
	I0603 13:15:30.247035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:15:30.247106       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:15:30.247128       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:15:30.253424       1 config.go:319] "Starting node config controller"
	I0603 13:15:30.253454       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:15:30.348335       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 13:15:30.348410       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:15:30.353976       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb] <==
	I0603 13:15:27.090809       1 serving.go:380] Generated self-signed cert in-memory
	W0603 13:15:29.048930       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 13:15:29.049119       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 13:15:29.049151       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 13:15:29.049255       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 13:15:29.088776       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 13:15:29.088823       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:15:29.090701       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 13:15:29.090879       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 13:15:29.090926       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 13:15:29.090958       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 13:15:29.191172       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f] <==
	E0603 13:09:29.250866       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 13:09:29.250894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 13:09:29.250922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 13:09:29.250948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 13:09:29.250987       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 13:09:30.070403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 13:09:30.070458       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 13:09:30.188875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 13:09:30.188955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 13:09:30.207966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 13:09:30.208159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 13:09:30.214937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 13:09:30.215103       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 13:09:30.219013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 13:09:30.219157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 13:09:30.325391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 13:09:30.325529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 13:09:30.351841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 13:09:30.351962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 13:09:30.372929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 13:09:30.373012       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 13:09:30.711267       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 13:09:30.711871       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 13:09:32.725475       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 13:13:50.165820       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 13:15:26 multinode-101468 kubelet[3110]: I0603 13:15:26.788263    3110 kubelet_node_status.go:73] "Attempting to register node" node="multinode-101468"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.166267    3110 kubelet_node_status.go:112] "Node was previously registered" node="multinode-101468"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.166383    3110 kubelet_node_status.go:76] "Successfully registered node" node="multinode-101468"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.167968    3110 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.169685    3110 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: E0603 13:15:29.182750    3110 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"multinode-101468\" not found"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.255033    3110 apiserver.go:52] "Watching apiserver"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.259467    3110 topology_manager.go:215] "Topology Admit Handler" podUID="9bf865e3-3171-4447-a928-3f7bcde9b7c4" podNamespace="kube-system" podName="storage-provisioner"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.259595    3110 topology_manager.go:215] "Topology Admit Handler" podUID="3e7c090a-031c-483b-b89d-6192f0b73a9d" podNamespace="kube-system" podName="kindnet-m96bv"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.259645    3110 topology_manager.go:215] "Topology Admit Handler" podUID="10b1fbac-04e0-46c6-a2cd-8befd0343e0e" podNamespace="kube-system" podName="kube-proxy-nf6c2"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.259691    3110 topology_manager.go:215] "Topology Admit Handler" podUID="ceb550ef-f06f-425c-b564-f4ad51d298bc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rszqr"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.259731    3110 topology_manager.go:215] "Topology Admit Handler" podUID="7a0d546e-6072-497f-8464-3a2dd172f9a3" podNamespace="default" podName="busybox-fc5497c4f-7jrcp"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.274322    3110 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.374900    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3e7c090a-031c-483b-b89d-6192f0b73a9d-cni-cfg\") pod \"kindnet-m96bv\" (UID: \"3e7c090a-031c-483b-b89d-6192f0b73a9d\") " pod="kube-system/kindnet-m96bv"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.375523    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e7c090a-031c-483b-b89d-6192f0b73a9d-xtables-lock\") pod \"kindnet-m96bv\" (UID: \"3e7c090a-031c-483b-b89d-6192f0b73a9d\") " pod="kube-system/kindnet-m96bv"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.375724    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e7c090a-031c-483b-b89d-6192f0b73a9d-lib-modules\") pod \"kindnet-m96bv\" (UID: \"3e7c090a-031c-483b-b89d-6192f0b73a9d\") " pod="kube-system/kindnet-m96bv"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.375805    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b1fbac-04e0-46c6-a2cd-8befd0343e0e-lib-modules\") pod \"kube-proxy-nf6c2\" (UID: \"10b1fbac-04e0-46c6-a2cd-8befd0343e0e\") " pod="kube-system/kube-proxy-nf6c2"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.376197    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b1fbac-04e0-46c6-a2cd-8befd0343e0e-xtables-lock\") pod \"kube-proxy-nf6c2\" (UID: \"10b1fbac-04e0-46c6-a2cd-8befd0343e0e\") " pod="kube-system/kube-proxy-nf6c2"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.376507    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9bf865e3-3171-4447-a928-3f7bcde9b7c4-tmp\") pod \"storage-provisioner\" (UID: \"9bf865e3-3171-4447-a928-3f7bcde9b7c4\") " pod="kube-system/storage-provisioner"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: E0603 13:15:29.442819    3110 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-101468\" already exists" pod="kube-system/kube-apiserver-multinode-101468"
	Jun 03 13:16:25 multinode-101468 kubelet[3110]: E0603 13:16:25.368543    3110 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:16:25 multinode-101468 kubelet[3110]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:16:25 multinode-101468 kubelet[3110]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:16:25 multinode-101468 kubelet[3110]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:16:25 multinode-101468 kubelet[3110]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:16:43.975731 1115108 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19011-1078924/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-101468 -n multinode-101468
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-101468 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (299.40s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 stop
E0603 13:17:45.541435 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 13:18:01.277952 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-101468 stop: exit status 82 (2m0.479667876s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-101468-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-101468 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-101468 status: exit status 3 (18.710944895s)

                                                
                                                
-- stdout --
	multinode-101468
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-101468-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:19:07.497834 1116206 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host
	E0603 13:19:07.497883 1116206 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-101468 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-101468 -n multinode-101468
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-101468 logs -n 25: (1.528997243s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m02:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468:/home/docker/cp-test_multinode-101468-m02_multinode-101468.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n multinode-101468 sudo cat                                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /home/docker/cp-test_multinode-101468-m02_multinode-101468.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m02:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03:/home/docker/cp-test_multinode-101468-m02_multinode-101468-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n multinode-101468-m03 sudo cat                                   | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /home/docker/cp-test_multinode-101468-m02_multinode-101468-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp testdata/cp-test.txt                                                | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m03:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2236251675/001/cp-test_multinode-101468-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m03:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468:/home/docker/cp-test_multinode-101468-m03_multinode-101468.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n multinode-101468 sudo cat                                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /home/docker/cp-test_multinode-101468-m03_multinode-101468.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-101468 cp multinode-101468-m03:/home/docker/cp-test.txt                       | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m02:/home/docker/cp-test_multinode-101468-m03_multinode-101468-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n                                                                 | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | multinode-101468-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-101468 ssh -n multinode-101468-m02 sudo cat                                   | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | /home/docker/cp-test_multinode-101468-m03_multinode-101468-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-101468 node stop m03                                                          | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	| node    | multinode-101468 node start                                                             | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC | 03 Jun 24 13:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-101468                                                                | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC |                     |
	| stop    | -p multinode-101468                                                                     | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:11 UTC |                     |
	| start   | -p multinode-101468                                                                     | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:13 UTC | 03 Jun 24 13:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-101468                                                                | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:16 UTC |                     |
	| node    | multinode-101468 node delete                                                            | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:16 UTC | 03 Jun 24 13:16 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-101468 stop                                                                   | multinode-101468 | jenkins | v1.33.1 | 03 Jun 24 13:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:13:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:13:49.241626 1114112 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:13:49.242056 1114112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:13:49.242073 1114112 out.go:304] Setting ErrFile to fd 2...
	I0603 13:13:49.242080 1114112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:13:49.242724 1114112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:13:49.243501 1114112 out.go:298] Setting JSON to false
	I0603 13:13:49.244527 1114112 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":14176,"bootTime":1717406253,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:13:49.244591 1114112 start.go:139] virtualization: kvm guest
	I0603 13:13:49.247675 1114112 out.go:177] * [multinode-101468] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:13:49.249373 1114112 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:13:49.250850 1114112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:13:49.249393 1114112 notify.go:220] Checking for updates...
	I0603 13:13:49.253725 1114112 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:13:49.254931 1114112 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:13:49.256121 1114112 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:13:49.257381 1114112 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:13:49.259123 1114112 config.go:182] Loaded profile config "multinode-101468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:13:49.259251 1114112 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:13:49.259665 1114112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:13:49.259724 1114112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:13:49.278236 1114112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0603 13:13:49.278667 1114112 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:13:49.279210 1114112 main.go:141] libmachine: Using API Version  1
	I0603 13:13:49.279245 1114112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:13:49.279600 1114112 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:13:49.279808 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:13:49.315322 1114112 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:13:49.316706 1114112 start.go:297] selected driver: kvm2
	I0603 13:13:49.316728 1114112 start.go:901] validating driver "kvm2" against &{Name:multinode-101468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:13:49.316871 1114112 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:13:49.317237 1114112 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:13:49.317323 1114112 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:13:49.332599 1114112 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:13:49.333299 1114112 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:13:49.333376 1114112 cni.go:84] Creating CNI manager for ""
	I0603 13:13:49.333393 1114112 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 13:13:49.333514 1114112 start.go:340] cluster config:
	{Name:multinode-101468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:13:49.333658 1114112 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:13:49.335654 1114112 out.go:177] * Starting "multinode-101468" primary control-plane node in "multinode-101468" cluster
	I0603 13:13:49.337011 1114112 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:13:49.337050 1114112 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 13:13:49.337068 1114112 cache.go:56] Caching tarball of preloaded images
	I0603 13:13:49.337194 1114112 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:13:49.337210 1114112 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 13:13:49.337363 1114112 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/config.json ...
	I0603 13:13:49.337626 1114112 start.go:360] acquireMachinesLock for multinode-101468: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:13:49.337674 1114112 start.go:364] duration metric: took 25.744µs to acquireMachinesLock for "multinode-101468"
	I0603 13:13:49.337695 1114112 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:13:49.337704 1114112 fix.go:54] fixHost starting: 
	I0603 13:13:49.338011 1114112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:13:49.338046 1114112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:13:49.352642 1114112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0603 13:13:49.353099 1114112 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:13:49.353695 1114112 main.go:141] libmachine: Using API Version  1
	I0603 13:13:49.353725 1114112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:13:49.354070 1114112 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:13:49.354303 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:13:49.354474 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetState
	I0603 13:13:49.356129 1114112 fix.go:112] recreateIfNeeded on multinode-101468: state=Running err=<nil>
	W0603 13:13:49.356147 1114112 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:13:49.359152 1114112 out.go:177] * Updating the running kvm2 "multinode-101468" VM ...
	I0603 13:13:49.360573 1114112 machine.go:94] provisionDockerMachine start ...
	I0603 13:13:49.360602 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:13:49.360842 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.363589 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.363944 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.363980 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.364107 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:49.364307 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.364495 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.364653 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:49.364836 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:13:49.365068 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:13:49.365081 1114112 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:13:49.483591 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101468
	
	I0603 13:13:49.483619 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetMachineName
	I0603 13:13:49.483885 1114112 buildroot.go:166] provisioning hostname "multinode-101468"
	I0603 13:13:49.483915 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetMachineName
	I0603 13:13:49.484112 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.486811 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.487205 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.487256 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.487396 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:49.487577 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.487737 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.487898 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:49.488063 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:13:49.488277 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:13:49.488294 1114112 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-101468 && echo "multinode-101468" | sudo tee /etc/hostname
	I0603 13:13:49.614930 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-101468
	
	I0603 13:13:49.614976 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.617743 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.618071 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.618101 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.618262 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:49.618490 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.618714 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.618884 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:49.619067 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:13:49.619250 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:13:49.619267 1114112 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-101468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-101468/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-101468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:13:49.726553 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:13:49.726582 1114112 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:13:49.726617 1114112 buildroot.go:174] setting up certificates
	I0603 13:13:49.726628 1114112 provision.go:84] configureAuth start
	I0603 13:13:49.726640 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetMachineName
	I0603 13:13:49.726957 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetIP
	I0603 13:13:49.729838 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.730194 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.730222 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.730408 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.732614 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.733007 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.733045 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.733197 1114112 provision.go:143] copyHostCerts
	I0603 13:13:49.733233 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:13:49.733285 1114112 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:13:49.733296 1114112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:13:49.733386 1114112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:13:49.733494 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:13:49.733517 1114112 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:13:49.733526 1114112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:13:49.733557 1114112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:13:49.733606 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:13:49.733630 1114112 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:13:49.733639 1114112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:13:49.733669 1114112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:13:49.733724 1114112 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.multinode-101468 san=[127.0.0.1 192.168.39.141 localhost minikube multinode-101468]
	I0603 13:13:49.853554 1114112 provision.go:177] copyRemoteCerts
	I0603 13:13:49.853617 1114112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:13:49.853642 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:49.856178 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.856509 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:49.856532 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:49.856667 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:49.856893 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:49.857024 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:49.857137 1114112 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468/id_rsa Username:docker}
	I0603 13:13:49.944861 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0603 13:13:49.944945 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:13:49.972068 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0603 13:13:49.972139 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:13:49.997167 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0603 13:13:49.997253 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0603 13:13:50.022768 1114112 provision.go:87] duration metric: took 296.124569ms to configureAuth
	I0603 13:13:50.022798 1114112 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:13:50.023030 1114112 config.go:182] Loaded profile config "multinode-101468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:13:50.023126 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:13:50.025510 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:50.025895 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:13:50.025932 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:13:50.026066 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:13:50.026301 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:50.026485 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:13:50.026639 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:13:50.026777 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:13:50.026962 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:13:50.026982 1114112 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:15:20.805943 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:15:20.805978 1114112 machine.go:97] duration metric: took 1m31.445384273s to provisionDockerMachine
	I0603 13:15:20.805999 1114112 start.go:293] postStartSetup for "multinode-101468" (driver="kvm2")
	I0603 13:15:20.806010 1114112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:15:20.806028 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:20.806413 1114112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:15:20.806461 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:15:20.809916 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:20.810338 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:20.810371 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:20.810491 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:15:20.810728 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:20.810961 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:15:20.811187 1114112 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468/id_rsa Username:docker}
	I0603 13:15:20.898141 1114112 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:15:20.903428 1114112 command_runner.go:130] > NAME=Buildroot
	I0603 13:15:20.903455 1114112 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0603 13:15:20.903461 1114112 command_runner.go:130] > ID=buildroot
	I0603 13:15:20.903469 1114112 command_runner.go:130] > VERSION_ID=2023.02.9
	I0603 13:15:20.903477 1114112 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0603 13:15:20.903519 1114112 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:15:20.903540 1114112 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:15:20.903610 1114112 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:15:20.903703 1114112 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:15:20.903726 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /etc/ssl/certs/10862512.pem
	I0603 13:15:20.903943 1114112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:15:20.914818 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:15:20.940954 1114112 start.go:296] duration metric: took 134.940351ms for postStartSetup
	I0603 13:15:20.941012 1114112 fix.go:56] duration metric: took 1m31.603308631s for fixHost
	I0603 13:15:20.941056 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:15:20.943765 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:20.944140 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:20.944175 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:20.944375 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:15:20.944655 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:20.944833 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:20.944978 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:15:20.945231 1114112 main.go:141] libmachine: Using SSH client type: native
	I0603 13:15:20.945451 1114112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I0603 13:15:20.945463 1114112 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:15:21.054327 1114112 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717420521.031295798
	
	I0603 13:15:21.054353 1114112 fix.go:216] guest clock: 1717420521.031295798
	I0603 13:15:21.054360 1114112 fix.go:229] Guest: 2024-06-03 13:15:21.031295798 +0000 UTC Remote: 2024-06-03 13:15:20.941029963 +0000 UTC m=+91.737593437 (delta=90.265835ms)
	I0603 13:15:21.054400 1114112 fix.go:200] guest clock delta is within tolerance: 90.265835ms
	I0603 13:15:21.054405 1114112 start.go:83] releasing machines lock for "multinode-101468", held for 1m31.716719224s
	I0603 13:15:21.054439 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:21.054759 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetIP
	I0603 13:15:21.057471 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.057773 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:21.057809 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.058012 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:21.058578 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:21.058775 1114112 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:15:21.058889 1114112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:15:21.058937 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:15:21.059050 1114112 ssh_runner.go:195] Run: cat /version.json
	I0603 13:15:21.059076 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:15:21.061615 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.061656 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.061966 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:21.061994 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.062043 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:21.062068 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:21.062156 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:15:21.062242 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:15:21.062320 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:21.062399 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:15:21.062453 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:15:21.062515 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:15:21.062580 1114112 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468/id_rsa Username:docker}
	I0603 13:15:21.062684 1114112 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468/id_rsa Username:docker}
	I0603 13:15:21.142402 1114112 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0603 13:15:21.166262 1114112 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0603 13:15:21.167118 1114112 ssh_runner.go:195] Run: systemctl --version
	I0603 13:15:21.173308 1114112 command_runner.go:130] > systemd 252 (252)
	I0603 13:15:21.173346 1114112 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0603 13:15:21.173597 1114112 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:15:21.334506 1114112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0603 13:15:21.340968 1114112 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0603 13:15:21.341174 1114112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:15:21.341258 1114112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:15:21.351729 1114112 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 13:15:21.351758 1114112 start.go:494] detecting cgroup driver to use...
	I0603 13:15:21.351827 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:15:21.370332 1114112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:15:21.384472 1114112 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:15:21.384536 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:15:21.399293 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:15:21.413821 1114112 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:15:21.557557 1114112 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:15:21.702349 1114112 docker.go:233] disabling docker service ...
	I0603 13:15:21.702450 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:15:21.721882 1114112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:15:21.737507 1114112 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:15:21.881991 1114112 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:15:22.046126 1114112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:15:22.060366 1114112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:15:22.080090 1114112 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0603 13:15:22.080149 1114112 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:15:22.080205 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.091567 1114112 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:15:22.091671 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.102508 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.113794 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.124559 1114112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:15:22.135819 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.146808 1114112 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.158392 1114112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:15:22.169246 1114112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:15:22.179027 1114112 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0603 13:15:22.179158 1114112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:15:22.189154 1114112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:15:22.357469 1114112 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:15:22.609111 1114112 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:15:22.609208 1114112 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:15:22.614503 1114112 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0603 13:15:22.614525 1114112 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0603 13:15:22.614532 1114112 command_runner.go:130] > Device: 0,22	Inode: 1338        Links: 1
	I0603 13:15:22.614539 1114112 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 13:15:22.614545 1114112 command_runner.go:130] > Access: 2024-06-03 13:15:22.475642304 +0000
	I0603 13:15:22.614551 1114112 command_runner.go:130] > Modify: 2024-06-03 13:15:22.475642304 +0000
	I0603 13:15:22.614555 1114112 command_runner.go:130] > Change: 2024-06-03 13:15:22.475642304 +0000
	I0603 13:15:22.614559 1114112 command_runner.go:130] >  Birth: -
	I0603 13:15:22.614577 1114112 start.go:562] Will wait 60s for crictl version
	I0603 13:15:22.614619 1114112 ssh_runner.go:195] Run: which crictl
	I0603 13:15:22.618402 1114112 command_runner.go:130] > /usr/bin/crictl
	I0603 13:15:22.618478 1114112 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:15:22.655001 1114112 command_runner.go:130] > Version:  0.1.0
	I0603 13:15:22.655035 1114112 command_runner.go:130] > RuntimeName:  cri-o
	I0603 13:15:22.655055 1114112 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0603 13:15:22.655063 1114112 command_runner.go:130] > RuntimeApiVersion:  v1
	I0603 13:15:22.655142 1114112 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:15:22.655261 1114112 ssh_runner.go:195] Run: crio --version
	I0603 13:15:22.688541 1114112 command_runner.go:130] > crio version 1.29.1
	I0603 13:15:22.688575 1114112 command_runner.go:130] > Version:        1.29.1
	I0603 13:15:22.688585 1114112 command_runner.go:130] > GitCommit:      unknown
	I0603 13:15:22.688592 1114112 command_runner.go:130] > GitCommitDate:  unknown
	I0603 13:15:22.688598 1114112 command_runner.go:130] > GitTreeState:   clean
	I0603 13:15:22.688607 1114112 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0603 13:15:22.688613 1114112 command_runner.go:130] > GoVersion:      go1.21.6
	I0603 13:15:22.688619 1114112 command_runner.go:130] > Compiler:       gc
	I0603 13:15:22.688625 1114112 command_runner.go:130] > Platform:       linux/amd64
	I0603 13:15:22.688632 1114112 command_runner.go:130] > Linkmode:       dynamic
	I0603 13:15:22.688639 1114112 command_runner.go:130] > BuildTags:      
	I0603 13:15:22.688649 1114112 command_runner.go:130] >   containers_image_ostree_stub
	I0603 13:15:22.688657 1114112 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0603 13:15:22.688666 1114112 command_runner.go:130] >   btrfs_noversion
	I0603 13:15:22.688676 1114112 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0603 13:15:22.688685 1114112 command_runner.go:130] >   libdm_no_deferred_remove
	I0603 13:15:22.688693 1114112 command_runner.go:130] >   seccomp
	I0603 13:15:22.688702 1114112 command_runner.go:130] > LDFlags:          unknown
	I0603 13:15:22.688743 1114112 command_runner.go:130] > SeccompEnabled:   true
	I0603 13:15:22.688763 1114112 command_runner.go:130] > AppArmorEnabled:  false
	I0603 13:15:22.688853 1114112 ssh_runner.go:195] Run: crio --version
	I0603 13:15:22.717784 1114112 command_runner.go:130] > crio version 1.29.1
	I0603 13:15:22.717814 1114112 command_runner.go:130] > Version:        1.29.1
	I0603 13:15:22.717823 1114112 command_runner.go:130] > GitCommit:      unknown
	I0603 13:15:22.717831 1114112 command_runner.go:130] > GitCommitDate:  unknown
	I0603 13:15:22.717837 1114112 command_runner.go:130] > GitTreeState:   clean
	I0603 13:15:22.717846 1114112 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0603 13:15:22.717878 1114112 command_runner.go:130] > GoVersion:      go1.21.6
	I0603 13:15:22.717887 1114112 command_runner.go:130] > Compiler:       gc
	I0603 13:15:22.717895 1114112 command_runner.go:130] > Platform:       linux/amd64
	I0603 13:15:22.717903 1114112 command_runner.go:130] > Linkmode:       dynamic
	I0603 13:15:22.717911 1114112 command_runner.go:130] > BuildTags:      
	I0603 13:15:22.717919 1114112 command_runner.go:130] >   containers_image_ostree_stub
	I0603 13:15:22.717928 1114112 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0603 13:15:22.717935 1114112 command_runner.go:130] >   btrfs_noversion
	I0603 13:15:22.717961 1114112 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0603 13:15:22.717968 1114112 command_runner.go:130] >   libdm_no_deferred_remove
	I0603 13:15:22.717974 1114112 command_runner.go:130] >   seccomp
	I0603 13:15:22.717982 1114112 command_runner.go:130] > LDFlags:          unknown
	I0603 13:15:22.717988 1114112 command_runner.go:130] > SeccompEnabled:   true
	I0603 13:15:22.717995 1114112 command_runner.go:130] > AppArmorEnabled:  false
	I0603 13:15:22.721480 1114112 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:15:22.723274 1114112 main.go:141] libmachine: (multinode-101468) Calling .GetIP
	I0603 13:15:22.726508 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:22.726928 1114112 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:15:22.726959 1114112 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:15:22.727177 1114112 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 13:15:22.731630 1114112 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0603 13:15:22.731764 1114112 kubeadm.go:877] updating cluster {Name:multinode-101468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:15:22.731922 1114112 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:15:22.731966 1114112 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:15:22.777010 1114112 command_runner.go:130] > {
	I0603 13:15:22.777037 1114112 command_runner.go:130] >   "images": [
	I0603 13:15:22.777041 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777049 1114112 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0603 13:15:22.777054 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777060 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0603 13:15:22.777063 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777067 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777075 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0603 13:15:22.777083 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0603 13:15:22.777087 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777091 1114112 command_runner.go:130] >       "size": "65291810",
	I0603 13:15:22.777095 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777100 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777109 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777114 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777117 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777120 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777127 1114112 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0603 13:15:22.777138 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777144 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0603 13:15:22.777148 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777152 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777160 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0603 13:15:22.777169 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0603 13:15:22.777173 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777180 1114112 command_runner.go:130] >       "size": "65908273",
	I0603 13:15:22.777183 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777191 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777197 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777201 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777207 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777210 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777216 1114112 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0603 13:15:22.777220 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777231 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0603 13:15:22.777247 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777251 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777257 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0603 13:15:22.777264 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0603 13:15:22.777270 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777274 1114112 command_runner.go:130] >       "size": "1363676",
	I0603 13:15:22.777280 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777284 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777289 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777293 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777299 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777302 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777310 1114112 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0603 13:15:22.777317 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777322 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0603 13:15:22.777328 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777332 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777342 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0603 13:15:22.777357 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0603 13:15:22.777364 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777368 1114112 command_runner.go:130] >       "size": "31470524",
	I0603 13:15:22.777375 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777379 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777395 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777418 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777428 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777436 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777449 1114112 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0603 13:15:22.777458 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777466 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0603 13:15:22.777469 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777473 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777480 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0603 13:15:22.777492 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0603 13:15:22.777497 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777511 1114112 command_runner.go:130] >       "size": "61245718",
	I0603 13:15:22.777518 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777522 1114112 command_runner.go:130] >       "username": "nonroot",
	I0603 13:15:22.777526 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777537 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777543 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777547 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777555 1114112 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0603 13:15:22.777560 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777564 1114112 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0603 13:15:22.777570 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777575 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777584 1114112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0603 13:15:22.777592 1114112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0603 13:15:22.777598 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777603 1114112 command_runner.go:130] >       "size": "150779692",
	I0603 13:15:22.777609 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.777613 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.777616 1114112 command_runner.go:130] >       },
	I0603 13:15:22.777620 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777626 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777630 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777636 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777639 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777647 1114112 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0603 13:15:22.777651 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777656 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0603 13:15:22.777661 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777665 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777679 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0603 13:15:22.777694 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0603 13:15:22.777703 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777709 1114112 command_runner.go:130] >       "size": "117601759",
	I0603 13:15:22.777718 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.777727 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.777736 1114112 command_runner.go:130] >       },
	I0603 13:15:22.777751 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777758 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777762 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777767 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777771 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777779 1114112 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0603 13:15:22.777786 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777791 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0603 13:15:22.777797 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777801 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777829 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0603 13:15:22.777840 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0603 13:15:22.777843 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777846 1114112 command_runner.go:130] >       "size": "112170310",
	I0603 13:15:22.777850 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.777859 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.777867 1114112 command_runner.go:130] >       },
	I0603 13:15:22.777874 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.777880 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.777885 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.777889 1114112 command_runner.go:130] >     },
	I0603 13:15:22.777893 1114112 command_runner.go:130] >     {
	I0603 13:15:22.777902 1114112 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0603 13:15:22.777907 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.777914 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0603 13:15:22.777920 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777926 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.777953 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0603 13:15:22.777964 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0603 13:15:22.777973 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.777979 1114112 command_runner.go:130] >       "size": "85933465",
	I0603 13:15:22.777988 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.777996 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.778006 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.778015 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.778021 1114112 command_runner.go:130] >     },
	I0603 13:15:22.778037 1114112 command_runner.go:130] >     {
	I0603 13:15:22.778051 1114112 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0603 13:15:22.778060 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.778068 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0603 13:15:22.778077 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.778084 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.778099 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0603 13:15:22.778113 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0603 13:15:22.778123 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.778133 1114112 command_runner.go:130] >       "size": "63026504",
	I0603 13:15:22.778143 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.778152 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.778158 1114112 command_runner.go:130] >       },
	I0603 13:15:22.778167 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.778173 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.778182 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.778187 1114112 command_runner.go:130] >     },
	I0603 13:15:22.778195 1114112 command_runner.go:130] >     {
	I0603 13:15:22.778204 1114112 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0603 13:15:22.778214 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.778225 1114112 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0603 13:15:22.778237 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.778246 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.778256 1114112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0603 13:15:22.778270 1114112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0603 13:15:22.778279 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.778286 1114112 command_runner.go:130] >       "size": "750414",
	I0603 13:15:22.778295 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.778301 1114112 command_runner.go:130] >         "value": "65535"
	I0603 13:15:22.778308 1114112 command_runner.go:130] >       },
	I0603 13:15:22.778315 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.778323 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.778327 1114112 command_runner.go:130] >       "pinned": true
	I0603 13:15:22.778333 1114112 command_runner.go:130] >     }
	I0603 13:15:22.778336 1114112 command_runner.go:130] >   ]
	I0603 13:15:22.778341 1114112 command_runner.go:130] > }
	I0603 13:15:22.778632 1114112 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:15:22.778650 1114112 crio.go:433] Images already preloaded, skipping extraction
	I0603 13:15:22.778734 1114112 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:15:22.812942 1114112 command_runner.go:130] > {
	I0603 13:15:22.812973 1114112 command_runner.go:130] >   "images": [
	I0603 13:15:22.812979 1114112 command_runner.go:130] >     {
	I0603 13:15:22.812991 1114112 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0603 13:15:22.812998 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813010 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0603 13:15:22.813014 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813018 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813027 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0603 13:15:22.813034 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0603 13:15:22.813038 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813042 1114112 command_runner.go:130] >       "size": "65291810",
	I0603 13:15:22.813049 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813053 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813062 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813068 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813072 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813075 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813081 1114112 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0603 13:15:22.813086 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813096 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0603 13:15:22.813103 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813106 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813115 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0603 13:15:22.813124 1114112 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0603 13:15:22.813130 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813134 1114112 command_runner.go:130] >       "size": "65908273",
	I0603 13:15:22.813138 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813148 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813154 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813158 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813163 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813166 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813174 1114112 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0603 13:15:22.813180 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813185 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0603 13:15:22.813191 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813195 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813204 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0603 13:15:22.813213 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0603 13:15:22.813219 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813223 1114112 command_runner.go:130] >       "size": "1363676",
	I0603 13:15:22.813227 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813243 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813250 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813254 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813258 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813262 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813272 1114112 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0603 13:15:22.813281 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813292 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0603 13:15:22.813301 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813311 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813325 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0603 13:15:22.813343 1114112 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0603 13:15:22.813350 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813362 1114112 command_runner.go:130] >       "size": "31470524",
	I0603 13:15:22.813368 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813373 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813379 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813383 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813388 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813391 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813400 1114112 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0603 13:15:22.813421 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813429 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0603 13:15:22.813435 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813443 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813452 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0603 13:15:22.813462 1114112 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0603 13:15:22.813467 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813472 1114112 command_runner.go:130] >       "size": "61245718",
	I0603 13:15:22.813478 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.813483 1114112 command_runner.go:130] >       "username": "nonroot",
	I0603 13:15:22.813491 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813495 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813501 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813504 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813513 1114112 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0603 13:15:22.813519 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813524 1114112 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0603 13:15:22.813530 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813534 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813540 1114112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0603 13:15:22.813549 1114112 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0603 13:15:22.813555 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813560 1114112 command_runner.go:130] >       "size": "150779692",
	I0603 13:15:22.813566 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.813570 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.813576 1114112 command_runner.go:130] >       },
	I0603 13:15:22.813580 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813587 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813597 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813603 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813607 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813615 1114112 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0603 13:15:22.813619 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813624 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0603 13:15:22.813630 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813633 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813643 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0603 13:15:22.813652 1114112 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0603 13:15:22.813657 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813662 1114112 command_runner.go:130] >       "size": "117601759",
	I0603 13:15:22.813670 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.813680 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.813688 1114112 command_runner.go:130] >       },
	I0603 13:15:22.813698 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813707 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813716 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813724 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813733 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813745 1114112 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0603 13:15:22.813754 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813766 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0603 13:15:22.813775 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813784 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813821 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0603 13:15:22.813836 1114112 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0603 13:15:22.813842 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813852 1114112 command_runner.go:130] >       "size": "112170310",
	I0603 13:15:22.813860 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.813866 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.813874 1114112 command_runner.go:130] >       },
	I0603 13:15:22.813881 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.813890 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.813896 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.813901 1114112 command_runner.go:130] >     },
	I0603 13:15:22.813919 1114112 command_runner.go:130] >     {
	I0603 13:15:22.813933 1114112 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0603 13:15:22.813942 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.813951 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0603 13:15:22.813959 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.813966 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.813981 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0603 13:15:22.814001 1114112 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0603 13:15:22.814011 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814017 1114112 command_runner.go:130] >       "size": "85933465",
	I0603 13:15:22.814026 1114112 command_runner.go:130] >       "uid": null,
	I0603 13:15:22.814033 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.814041 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.814045 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.814051 1114112 command_runner.go:130] >     },
	I0603 13:15:22.814054 1114112 command_runner.go:130] >     {
	I0603 13:15:22.814061 1114112 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0603 13:15:22.814067 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.814074 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0603 13:15:22.814082 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814088 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.814101 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0603 13:15:22.814117 1114112 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0603 13:15:22.814125 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814130 1114112 command_runner.go:130] >       "size": "63026504",
	I0603 13:15:22.814135 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.814139 1114112 command_runner.go:130] >         "value": "0"
	I0603 13:15:22.814143 1114112 command_runner.go:130] >       },
	I0603 13:15:22.814147 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.814151 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.814155 1114112 command_runner.go:130] >       "pinned": false
	I0603 13:15:22.814161 1114112 command_runner.go:130] >     },
	I0603 13:15:22.814165 1114112 command_runner.go:130] >     {
	I0603 13:15:22.814170 1114112 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0603 13:15:22.814177 1114112 command_runner.go:130] >       "repoTags": [
	I0603 13:15:22.814181 1114112 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0603 13:15:22.814192 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814199 1114112 command_runner.go:130] >       "repoDigests": [
	I0603 13:15:22.814206 1114112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0603 13:15:22.814215 1114112 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0603 13:15:22.814221 1114112 command_runner.go:130] >       ],
	I0603 13:15:22.814225 1114112 command_runner.go:130] >       "size": "750414",
	I0603 13:15:22.814228 1114112 command_runner.go:130] >       "uid": {
	I0603 13:15:22.814238 1114112 command_runner.go:130] >         "value": "65535"
	I0603 13:15:22.814244 1114112 command_runner.go:130] >       },
	I0603 13:15:22.814248 1114112 command_runner.go:130] >       "username": "",
	I0603 13:15:22.814252 1114112 command_runner.go:130] >       "spec": null,
	I0603 13:15:22.814255 1114112 command_runner.go:130] >       "pinned": true
	I0603 13:15:22.814258 1114112 command_runner.go:130] >     }
	I0603 13:15:22.814261 1114112 command_runner.go:130] >   ]
	I0603 13:15:22.814265 1114112 command_runner.go:130] > }
	I0603 13:15:22.814424 1114112 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:15:22.814438 1114112 cache_images.go:84] Images are preloaded, skipping loading
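(Editor's note, illustrative only.) The two log lines above record the preload check: the JSON image list returned by the CRI runtime is compared against the images required for the requested Kubernetes version, and loading is skipped when everything is already present. The following is a minimal Go sketch of that kind of check, not minikube's actual implementation; it assumes `crictl images -o json` is available on the node and that the "required" tag list below is only an example derived from v1.30.1.

// Illustrative sketch: verify that required images are already present
// in the CRI runtime's store, in the spirit of the preload check above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Example set only; a real check would derive this from the Kubernetes version.
	required := []string{
		"registry.k8s.io/kube-scheduler:v1.30.1",
		"registry.k8s.io/pause:3.9",
	}

	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, tag := range required {
		if !have[tag] {
			fmt.Println("missing image:", tag)
			return
		}
	}
	fmt.Println("all images are preloaded")
}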
	I0603 13:15:22.814447 1114112 kubeadm.go:928] updating node { 192.168.39.141 8443 v1.30.1 crio true true} ...
	I0603 13:15:22.814579 1114112 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-101468 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
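(Editor's note, illustrative only.) The kubelet [Service] drop-in logged above is generated from the node's cluster config (version, node name, node IP). Below is a minimal Go text/template sketch of how such a drop-in could be rendered; the struct and template here are assumptions for illustration and are not minikube's own types, though the values are taken from the log lines above.

// Illustrative sketch: render a kubelet systemd drop-in like the one logged above.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log above.
	opts := kubeletOpts{
		KubernetesVersion: "v1.30.1",
		NodeName:          "multinode-101468",
		NodeIP:            "192.168.39.141",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}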
	I0603 13:15:22.814643 1114112 ssh_runner.go:195] Run: crio config
	I0603 13:15:22.848439 1114112 command_runner.go:130] ! time="2024-06-03 13:15:22.825434204Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0603 13:15:22.854503 1114112 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0603 13:15:22.861480 1114112 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0603 13:15:22.861502 1114112 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0603 13:15:22.861508 1114112 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0603 13:15:22.861511 1114112 command_runner.go:130] > #
	I0603 13:15:22.861519 1114112 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0603 13:15:22.861525 1114112 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0603 13:15:22.861531 1114112 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0603 13:15:22.861537 1114112 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0603 13:15:22.861547 1114112 command_runner.go:130] > # reload'.
	I0603 13:15:22.861553 1114112 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0603 13:15:22.861559 1114112 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0603 13:15:22.861565 1114112 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0603 13:15:22.861580 1114112 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0603 13:15:22.861588 1114112 command_runner.go:130] > [crio]
	I0603 13:15:22.861599 1114112 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0603 13:15:22.861609 1114112 command_runner.go:130] > # containers images, in this directory.
	I0603 13:15:22.861616 1114112 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0603 13:15:22.861631 1114112 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0603 13:15:22.861642 1114112 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0603 13:15:22.861653 1114112 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0603 13:15:22.861660 1114112 command_runner.go:130] > # imagestore = ""
	I0603 13:15:22.861670 1114112 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0603 13:15:22.861691 1114112 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0603 13:15:22.861698 1114112 command_runner.go:130] > storage_driver = "overlay"
	I0603 13:15:22.861703 1114112 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0603 13:15:22.861713 1114112 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0603 13:15:22.861723 1114112 command_runner.go:130] > storage_option = [
	I0603 13:15:22.861733 1114112 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0603 13:15:22.861741 1114112 command_runner.go:130] > ]
	I0603 13:15:22.861751 1114112 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0603 13:15:22.861757 1114112 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0603 13:15:22.861761 1114112 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0603 13:15:22.861766 1114112 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0603 13:15:22.861775 1114112 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0603 13:15:22.861779 1114112 command_runner.go:130] > # always happen on a node reboot
	I0603 13:15:22.861786 1114112 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0603 13:15:22.861798 1114112 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0603 13:15:22.861806 1114112 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0603 13:15:22.861811 1114112 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0603 13:15:22.861816 1114112 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0603 13:15:22.861823 1114112 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0603 13:15:22.861833 1114112 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0603 13:15:22.861838 1114112 command_runner.go:130] > # internal_wipe = true
	I0603 13:15:22.861846 1114112 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0603 13:15:22.861859 1114112 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0603 13:15:22.861865 1114112 command_runner.go:130] > # internal_repair = false
	I0603 13:15:22.861871 1114112 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0603 13:15:22.861876 1114112 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0603 13:15:22.861884 1114112 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0603 13:15:22.861889 1114112 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0603 13:15:22.861894 1114112 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0603 13:15:22.861900 1114112 command_runner.go:130] > [crio.api]
	I0603 13:15:22.861905 1114112 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0603 13:15:22.861912 1114112 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0603 13:15:22.861917 1114112 command_runner.go:130] > # IP address on which the stream server will listen.
	I0603 13:15:22.861924 1114112 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0603 13:15:22.861930 1114112 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0603 13:15:22.861937 1114112 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0603 13:15:22.861941 1114112 command_runner.go:130] > # stream_port = "0"
	I0603 13:15:22.861946 1114112 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0603 13:15:22.861953 1114112 command_runner.go:130] > # stream_enable_tls = false
	I0603 13:15:22.861959 1114112 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0603 13:15:22.861965 1114112 command_runner.go:130] > # stream_idle_timeout = ""
	I0603 13:15:22.861971 1114112 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0603 13:15:22.861977 1114112 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0603 13:15:22.861983 1114112 command_runner.go:130] > # minutes.
	I0603 13:15:22.861987 1114112 command_runner.go:130] > # stream_tls_cert = ""
	I0603 13:15:22.861996 1114112 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0603 13:15:22.862005 1114112 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0603 13:15:22.862008 1114112 command_runner.go:130] > # stream_tls_key = ""
	I0603 13:15:22.862014 1114112 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0603 13:15:22.862021 1114112 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0603 13:15:22.862039 1114112 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0603 13:15:22.862047 1114112 command_runner.go:130] > # stream_tls_ca = ""
	I0603 13:15:22.862053 1114112 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0603 13:15:22.862058 1114112 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0603 13:15:22.862064 1114112 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0603 13:15:22.862074 1114112 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0603 13:15:22.862080 1114112 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0603 13:15:22.862085 1114112 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0603 13:15:22.862093 1114112 command_runner.go:130] > [crio.runtime]
	I0603 13:15:22.862098 1114112 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0603 13:15:22.862103 1114112 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0603 13:15:22.862107 1114112 command_runner.go:130] > # "nofile=1024:2048"
	I0603 13:15:22.862112 1114112 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0603 13:15:22.862116 1114112 command_runner.go:130] > # default_ulimits = [
	I0603 13:15:22.862119 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862124 1114112 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0603 13:15:22.862129 1114112 command_runner.go:130] > # no_pivot = false
	I0603 13:15:22.862133 1114112 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0603 13:15:22.862139 1114112 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0603 13:15:22.862144 1114112 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0603 13:15:22.862149 1114112 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0603 13:15:22.862155 1114112 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0603 13:15:22.862161 1114112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0603 13:15:22.862166 1114112 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0603 13:15:22.862170 1114112 command_runner.go:130] > # Cgroup setting for conmon
	I0603 13:15:22.862178 1114112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0603 13:15:22.862182 1114112 command_runner.go:130] > conmon_cgroup = "pod"
	I0603 13:15:22.862187 1114112 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0603 13:15:22.862194 1114112 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0603 13:15:22.862201 1114112 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0603 13:15:22.862207 1114112 command_runner.go:130] > conmon_env = [
	I0603 13:15:22.862212 1114112 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0603 13:15:22.862215 1114112 command_runner.go:130] > ]
	I0603 13:15:22.862220 1114112 command_runner.go:130] > # Additional environment variables to set for all the
	I0603 13:15:22.862230 1114112 command_runner.go:130] > # containers. These are overridden if set in the
	I0603 13:15:22.862238 1114112 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0603 13:15:22.862242 1114112 command_runner.go:130] > # default_env = [
	I0603 13:15:22.862245 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862251 1114112 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0603 13:15:22.862260 1114112 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0603 13:15:22.862270 1114112 command_runner.go:130] > # selinux = false
	I0603 13:15:22.862278 1114112 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0603 13:15:22.862284 1114112 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0603 13:15:22.862291 1114112 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0603 13:15:22.862301 1114112 command_runner.go:130] > # seccomp_profile = ""
	I0603 13:15:22.862309 1114112 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0603 13:15:22.862314 1114112 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0603 13:15:22.862320 1114112 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0603 13:15:22.862326 1114112 command_runner.go:130] > # which might increase security.
	I0603 13:15:22.862331 1114112 command_runner.go:130] > # This option is currently deprecated,
	I0603 13:15:22.862336 1114112 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0603 13:15:22.862343 1114112 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0603 13:15:22.862349 1114112 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0603 13:15:22.862356 1114112 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0603 13:15:22.862362 1114112 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0603 13:15:22.862370 1114112 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0603 13:15:22.862375 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.862382 1114112 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0603 13:15:22.862387 1114112 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0603 13:15:22.862394 1114112 command_runner.go:130] > # the cgroup blockio controller.
	I0603 13:15:22.862398 1114112 command_runner.go:130] > # blockio_config_file = ""
	I0603 13:15:22.862404 1114112 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0603 13:15:22.862409 1114112 command_runner.go:130] > # blockio parameters.
	I0603 13:15:22.862412 1114112 command_runner.go:130] > # blockio_reload = false
	I0603 13:15:22.862418 1114112 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0603 13:15:22.862424 1114112 command_runner.go:130] > # irqbalance daemon.
	I0603 13:15:22.862429 1114112 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0603 13:15:22.862438 1114112 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0603 13:15:22.862444 1114112 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0603 13:15:22.862453 1114112 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0603 13:15:22.862458 1114112 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0603 13:15:22.862465 1114112 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0603 13:15:22.862471 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.862475 1114112 command_runner.go:130] > # rdt_config_file = ""
	I0603 13:15:22.862480 1114112 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0603 13:15:22.862484 1114112 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0603 13:15:22.862517 1114112 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0603 13:15:22.862524 1114112 command_runner.go:130] > # separate_pull_cgroup = ""
	I0603 13:15:22.862530 1114112 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0603 13:15:22.862535 1114112 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0603 13:15:22.862547 1114112 command_runner.go:130] > # will be added.
	I0603 13:15:22.862552 1114112 command_runner.go:130] > # default_capabilities = [
	I0603 13:15:22.862555 1114112 command_runner.go:130] > # 	"CHOWN",
	I0603 13:15:22.862561 1114112 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0603 13:15:22.862564 1114112 command_runner.go:130] > # 	"FSETID",
	I0603 13:15:22.862570 1114112 command_runner.go:130] > # 	"FOWNER",
	I0603 13:15:22.862573 1114112 command_runner.go:130] > # 	"SETGID",
	I0603 13:15:22.862577 1114112 command_runner.go:130] > # 	"SETUID",
	I0603 13:15:22.862580 1114112 command_runner.go:130] > # 	"SETPCAP",
	I0603 13:15:22.862584 1114112 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0603 13:15:22.862587 1114112 command_runner.go:130] > # 	"KILL",
	I0603 13:15:22.862591 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862600 1114112 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0603 13:15:22.862609 1114112 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0603 13:15:22.862613 1114112 command_runner.go:130] > # add_inheritable_capabilities = false
	I0603 13:15:22.862621 1114112 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0603 13:15:22.862626 1114112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0603 13:15:22.862631 1114112 command_runner.go:130] > default_sysctls = [
	I0603 13:15:22.862638 1114112 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0603 13:15:22.862641 1114112 command_runner.go:130] > ]
	I0603 13:15:22.862646 1114112 command_runner.go:130] > # List of devices on the host that a
	I0603 13:15:22.862654 1114112 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0603 13:15:22.862658 1114112 command_runner.go:130] > # allowed_devices = [
	I0603 13:15:22.862668 1114112 command_runner.go:130] > # 	"/dev/fuse",
	I0603 13:15:22.862671 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862676 1114112 command_runner.go:130] > # List of additional devices, specified as
	I0603 13:15:22.862683 1114112 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0603 13:15:22.862690 1114112 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0603 13:15:22.862695 1114112 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0603 13:15:22.862701 1114112 command_runner.go:130] > # additional_devices = [
	I0603 13:15:22.862704 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862709 1114112 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0603 13:15:22.862715 1114112 command_runner.go:130] > # cdi_spec_dirs = [
	I0603 13:15:22.862718 1114112 command_runner.go:130] > # 	"/etc/cdi",
	I0603 13:15:22.862722 1114112 command_runner.go:130] > # 	"/var/run/cdi",
	I0603 13:15:22.862725 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862737 1114112 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0603 13:15:22.862745 1114112 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0603 13:15:22.862749 1114112 command_runner.go:130] > # Defaults to false.
	I0603 13:15:22.862756 1114112 command_runner.go:130] > # device_ownership_from_security_context = false
	I0603 13:15:22.862762 1114112 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0603 13:15:22.862770 1114112 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0603 13:15:22.862773 1114112 command_runner.go:130] > # hooks_dir = [
	I0603 13:15:22.862778 1114112 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0603 13:15:22.862784 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.862789 1114112 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0603 13:15:22.862795 1114112 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0603 13:15:22.862802 1114112 command_runner.go:130] > # its default mounts from the following two files:
	I0603 13:15:22.862805 1114112 command_runner.go:130] > #
	I0603 13:15:22.862811 1114112 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0603 13:15:22.862818 1114112 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0603 13:15:22.862823 1114112 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0603 13:15:22.862829 1114112 command_runner.go:130] > #
	I0603 13:15:22.862834 1114112 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0603 13:15:22.862842 1114112 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0603 13:15:22.862849 1114112 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0603 13:15:22.862855 1114112 command_runner.go:130] > #      only add mounts it finds in this file.
	I0603 13:15:22.862858 1114112 command_runner.go:130] > #
	I0603 13:15:22.862862 1114112 command_runner.go:130] > # default_mounts_file = ""
	I0603 13:15:22.862867 1114112 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0603 13:15:22.862874 1114112 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0603 13:15:22.862878 1114112 command_runner.go:130] > pids_limit = 1024
	I0603 13:15:22.862884 1114112 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0603 13:15:22.862891 1114112 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0603 13:15:22.862897 1114112 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0603 13:15:22.862907 1114112 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0603 13:15:22.862911 1114112 command_runner.go:130] > # log_size_max = -1
	I0603 13:15:22.862917 1114112 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0603 13:15:22.862929 1114112 command_runner.go:130] > # log_to_journald = false
	I0603 13:15:22.862937 1114112 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0603 13:15:22.862942 1114112 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0603 13:15:22.862949 1114112 command_runner.go:130] > # Path to directory for container attach sockets.
	I0603 13:15:22.862958 1114112 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0603 13:15:22.862966 1114112 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0603 13:15:22.862970 1114112 command_runner.go:130] > # bind_mount_prefix = ""
	I0603 13:15:22.862977 1114112 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0603 13:15:22.862981 1114112 command_runner.go:130] > # read_only = false
	I0603 13:15:22.862988 1114112 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0603 13:15:22.862993 1114112 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0603 13:15:22.862998 1114112 command_runner.go:130] > # live configuration reload.
	I0603 13:15:22.863001 1114112 command_runner.go:130] > # log_level = "info"
	I0603 13:15:22.863006 1114112 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0603 13:15:22.863013 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.863017 1114112 command_runner.go:130] > # log_filter = ""
	I0603 13:15:22.863022 1114112 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0603 13:15:22.863031 1114112 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0603 13:15:22.863035 1114112 command_runner.go:130] > # separated by comma.
	I0603 13:15:22.863042 1114112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 13:15:22.863049 1114112 command_runner.go:130] > # uid_mappings = ""
	I0603 13:15:22.863055 1114112 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0603 13:15:22.863065 1114112 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0603 13:15:22.863071 1114112 command_runner.go:130] > # separated by comma.
	I0603 13:15:22.863078 1114112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 13:15:22.863084 1114112 command_runner.go:130] > # gid_mappings = ""
	I0603 13:15:22.863090 1114112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0603 13:15:22.863097 1114112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0603 13:15:22.863102 1114112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0603 13:15:22.863112 1114112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 13:15:22.863116 1114112 command_runner.go:130] > # minimum_mappable_uid = -1
	I0603 13:15:22.863123 1114112 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0603 13:15:22.863129 1114112 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0603 13:15:22.863137 1114112 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0603 13:15:22.863144 1114112 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0603 13:15:22.863151 1114112 command_runner.go:130] > # minimum_mappable_gid = -1
	I0603 13:15:22.863156 1114112 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0603 13:15:22.863164 1114112 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0603 13:15:22.863169 1114112 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0603 13:15:22.863176 1114112 command_runner.go:130] > # ctr_stop_timeout = 30
	I0603 13:15:22.863186 1114112 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0603 13:15:22.863194 1114112 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0603 13:15:22.863198 1114112 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0603 13:15:22.863203 1114112 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0603 13:15:22.863207 1114112 command_runner.go:130] > drop_infra_ctr = false
	I0603 13:15:22.863213 1114112 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0603 13:15:22.863220 1114112 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0603 13:15:22.863226 1114112 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0603 13:15:22.863231 1114112 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0603 13:15:22.863237 1114112 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0603 13:15:22.863243 1114112 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0603 13:15:22.863250 1114112 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0603 13:15:22.863256 1114112 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0603 13:15:22.863262 1114112 command_runner.go:130] > # shared_cpuset = ""
	I0603 13:15:22.863275 1114112 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0603 13:15:22.863283 1114112 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0603 13:15:22.863287 1114112 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0603 13:15:22.863297 1114112 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0603 13:15:22.863301 1114112 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0603 13:15:22.863306 1114112 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0603 13:15:22.863314 1114112 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0603 13:15:22.863318 1114112 command_runner.go:130] > # enable_criu_support = false
	I0603 13:15:22.863327 1114112 command_runner.go:130] > # Enable/disable the generation of the container,
	I0603 13:15:22.863333 1114112 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0603 13:15:22.863340 1114112 command_runner.go:130] > # enable_pod_events = false
	I0603 13:15:22.863346 1114112 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0603 13:15:22.863354 1114112 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0603 13:15:22.863359 1114112 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0603 13:15:22.863365 1114112 command_runner.go:130] > # default_runtime = "runc"
	I0603 13:15:22.863370 1114112 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0603 13:15:22.863378 1114112 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0603 13:15:22.863388 1114112 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0603 13:15:22.863396 1114112 command_runner.go:130] > # creation as a file is not desired either.
	I0603 13:15:22.863403 1114112 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0603 13:15:22.863408 1114112 command_runner.go:130] > # the hostname is being managed dynamically.
	I0603 13:15:22.863418 1114112 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0603 13:15:22.863426 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.863434 1114112 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0603 13:15:22.863440 1114112 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0603 13:15:22.863448 1114112 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0603 13:15:22.863453 1114112 command_runner.go:130] > # Each entry in the table should follow the format:
	I0603 13:15:22.863458 1114112 command_runner.go:130] > #
	I0603 13:15:22.863462 1114112 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0603 13:15:22.863467 1114112 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0603 13:15:22.863516 1114112 command_runner.go:130] > # runtime_type = "oci"
	I0603 13:15:22.863523 1114112 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0603 13:15:22.863527 1114112 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0603 13:15:22.863531 1114112 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0603 13:15:22.863537 1114112 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0603 13:15:22.863541 1114112 command_runner.go:130] > # monitor_env = []
	I0603 13:15:22.863548 1114112 command_runner.go:130] > # privileged_without_host_devices = false
	I0603 13:15:22.863551 1114112 command_runner.go:130] > # allowed_annotations = []
	I0603 13:15:22.863557 1114112 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0603 13:15:22.863562 1114112 command_runner.go:130] > # Where:
	I0603 13:15:22.863567 1114112 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0603 13:15:22.863576 1114112 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0603 13:15:22.863582 1114112 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0603 13:15:22.863590 1114112 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0603 13:15:22.863594 1114112 command_runner.go:130] > #   in $PATH.
	I0603 13:15:22.863601 1114112 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0603 13:15:22.863606 1114112 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0603 13:15:22.863614 1114112 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0603 13:15:22.863618 1114112 command_runner.go:130] > #   state.
	I0603 13:15:22.863626 1114112 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0603 13:15:22.863631 1114112 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0603 13:15:22.863639 1114112 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0603 13:15:22.863644 1114112 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0603 13:15:22.863650 1114112 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0603 13:15:22.863657 1114112 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0603 13:15:22.863669 1114112 command_runner.go:130] > #   The currently recognized values are:
	I0603 13:15:22.863677 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0603 13:15:22.863684 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0603 13:15:22.863697 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0603 13:15:22.863705 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0603 13:15:22.863712 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0603 13:15:22.863720 1114112 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0603 13:15:22.863727 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0603 13:15:22.863735 1114112 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0603 13:15:22.863740 1114112 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0603 13:15:22.863748 1114112 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0603 13:15:22.863753 1114112 command_runner.go:130] > #   deprecated option "conmon".
	I0603 13:15:22.863760 1114112 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0603 13:15:22.863766 1114112 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0603 13:15:22.863773 1114112 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0603 13:15:22.863779 1114112 command_runner.go:130] > #   should be moved to the container's cgroup
	I0603 13:15:22.863786 1114112 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0603 13:15:22.863792 1114112 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0603 13:15:22.863799 1114112 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0603 13:15:22.863806 1114112 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0603 13:15:22.863809 1114112 command_runner.go:130] > #
	I0603 13:15:22.863814 1114112 command_runner.go:130] > # Using the seccomp notifier feature:
	I0603 13:15:22.863817 1114112 command_runner.go:130] > #
	I0603 13:15:22.863822 1114112 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0603 13:15:22.863829 1114112 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0603 13:15:22.863832 1114112 command_runner.go:130] > #
	I0603 13:15:22.863838 1114112 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0603 13:15:22.863846 1114112 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0603 13:15:22.863849 1114112 command_runner.go:130] > #
	I0603 13:15:22.863857 1114112 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0603 13:15:22.863860 1114112 command_runner.go:130] > # feature.
	I0603 13:15:22.863863 1114112 command_runner.go:130] > #
	I0603 13:15:22.863868 1114112 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0603 13:15:22.863874 1114112 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0603 13:15:22.863880 1114112 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0603 13:15:22.863889 1114112 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0603 13:15:22.863895 1114112 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0603 13:15:22.863900 1114112 command_runner.go:130] > #
	I0603 13:15:22.863906 1114112 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0603 13:15:22.863917 1114112 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0603 13:15:22.863922 1114112 command_runner.go:130] > #
	I0603 13:15:22.863927 1114112 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0603 13:15:22.863935 1114112 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0603 13:15:22.863938 1114112 command_runner.go:130] > #
	I0603 13:15:22.863944 1114112 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0603 13:15:22.863951 1114112 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0603 13:15:22.863955 1114112 command_runner.go:130] > # limitation.
	I0603 13:15:22.863959 1114112 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0603 13:15:22.863966 1114112 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0603 13:15:22.863970 1114112 command_runner.go:130] > runtime_type = "oci"
	I0603 13:15:22.863974 1114112 command_runner.go:130] > runtime_root = "/run/runc"
	I0603 13:15:22.863978 1114112 command_runner.go:130] > runtime_config_path = ""
	I0603 13:15:22.863983 1114112 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0603 13:15:22.863989 1114112 command_runner.go:130] > monitor_cgroup = "pod"
	I0603 13:15:22.863993 1114112 command_runner.go:130] > monitor_exec_cgroup = ""
	I0603 13:15:22.863997 1114112 command_runner.go:130] > monitor_env = [
	I0603 13:15:22.864002 1114112 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0603 13:15:22.864005 1114112 command_runner.go:130] > ]
	I0603 13:15:22.864009 1114112 command_runner.go:130] > privileged_without_host_devices = false
	I0603 13:15:22.864017 1114112 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0603 13:15:22.864022 1114112 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0603 13:15:22.864031 1114112 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0603 13:15:22.864038 1114112 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0603 13:15:22.864049 1114112 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0603 13:15:22.864057 1114112 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0603 13:15:22.864065 1114112 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0603 13:15:22.864075 1114112 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0603 13:15:22.864080 1114112 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0603 13:15:22.864087 1114112 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0603 13:15:22.864090 1114112 command_runner.go:130] > # Example:
	I0603 13:15:22.864094 1114112 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0603 13:15:22.864098 1114112 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0603 13:15:22.864103 1114112 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0603 13:15:22.864108 1114112 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0603 13:15:22.864111 1114112 command_runner.go:130] > # cpuset = 0
	I0603 13:15:22.864119 1114112 command_runner.go:130] > # cpushares = "0-1"
	I0603 13:15:22.864122 1114112 command_runner.go:130] > # Where:
	I0603 13:15:22.864126 1114112 command_runner.go:130] > # The workload name is workload-type.
	I0603 13:15:22.864133 1114112 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0603 13:15:22.864138 1114112 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0603 13:15:22.864145 1114112 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0603 13:15:22.864152 1114112 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0603 13:15:22.864157 1114112 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0603 13:15:22.864162 1114112 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0603 13:15:22.864168 1114112 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0603 13:15:22.864174 1114112 command_runner.go:130] > # Default value is set to true
	I0603 13:15:22.864178 1114112 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0603 13:15:22.864184 1114112 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0603 13:15:22.864190 1114112 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0603 13:15:22.864194 1114112 command_runner.go:130] > # Default value is set to 'false'
	I0603 13:15:22.864200 1114112 command_runner.go:130] > # disable_hostport_mapping = false
	I0603 13:15:22.864206 1114112 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0603 13:15:22.864211 1114112 command_runner.go:130] > #
	I0603 13:15:22.864216 1114112 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0603 13:15:22.864223 1114112 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0603 13:15:22.864229 1114112 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0603 13:15:22.864237 1114112 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0603 13:15:22.864242 1114112 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0603 13:15:22.864248 1114112 command_runner.go:130] > [crio.image]
	I0603 13:15:22.864254 1114112 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0603 13:15:22.864261 1114112 command_runner.go:130] > # default_transport = "docker://"
	I0603 13:15:22.864270 1114112 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0603 13:15:22.864278 1114112 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0603 13:15:22.864282 1114112 command_runner.go:130] > # global_auth_file = ""
	I0603 13:15:22.864287 1114112 command_runner.go:130] > # The image used to instantiate infra containers.
	I0603 13:15:22.864292 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.864297 1114112 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0603 13:15:22.864306 1114112 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0603 13:15:22.864311 1114112 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0603 13:15:22.864318 1114112 command_runner.go:130] > # This option supports live configuration reload.
	I0603 13:15:22.864322 1114112 command_runner.go:130] > # pause_image_auth_file = ""
	I0603 13:15:22.864333 1114112 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0603 13:15:22.864341 1114112 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0603 13:15:22.864347 1114112 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0603 13:15:22.864355 1114112 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0603 13:15:22.864359 1114112 command_runner.go:130] > # pause_command = "/pause"
	I0603 13:15:22.864364 1114112 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0603 13:15:22.864374 1114112 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0603 13:15:22.864380 1114112 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0603 13:15:22.864387 1114112 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0603 13:15:22.864394 1114112 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0603 13:15:22.864400 1114112 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0603 13:15:22.864406 1114112 command_runner.go:130] > # pinned_images = [
	I0603 13:15:22.864409 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.864414 1114112 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0603 13:15:22.864423 1114112 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0603 13:15:22.864429 1114112 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0603 13:15:22.864437 1114112 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0603 13:15:22.864441 1114112 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0603 13:15:22.864446 1114112 command_runner.go:130] > # signature_policy = ""
	I0603 13:15:22.864452 1114112 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0603 13:15:22.864460 1114112 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0603 13:15:22.864466 1114112 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0603 13:15:22.864474 1114112 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0603 13:15:22.864479 1114112 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0603 13:15:22.864483 1114112 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0603 13:15:22.864491 1114112 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0603 13:15:22.864497 1114112 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0603 13:15:22.864503 1114112 command_runner.go:130] > # changing them here.
	I0603 13:15:22.864507 1114112 command_runner.go:130] > # insecure_registries = [
	I0603 13:15:22.864512 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.864518 1114112 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0603 13:15:22.864525 1114112 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0603 13:15:22.864529 1114112 command_runner.go:130] > # image_volumes = "mkdir"
	I0603 13:15:22.864534 1114112 command_runner.go:130] > # Temporary directory to use for storing big files
	I0603 13:15:22.864539 1114112 command_runner.go:130] > # big_files_temporary_dir = ""
	I0603 13:15:22.864547 1114112 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0603 13:15:22.864555 1114112 command_runner.go:130] > # CNI plugins.
	I0603 13:15:22.864561 1114112 command_runner.go:130] > [crio.network]
	I0603 13:15:22.864567 1114112 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0603 13:15:22.864574 1114112 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0603 13:15:22.864578 1114112 command_runner.go:130] > # cni_default_network = ""
	I0603 13:15:22.864583 1114112 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0603 13:15:22.864590 1114112 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0603 13:15:22.864595 1114112 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0603 13:15:22.864600 1114112 command_runner.go:130] > # plugin_dirs = [
	I0603 13:15:22.864604 1114112 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0603 13:15:22.864610 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.864615 1114112 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0603 13:15:22.864621 1114112 command_runner.go:130] > [crio.metrics]
	I0603 13:15:22.864626 1114112 command_runner.go:130] > # Globally enable or disable metrics support.
	I0603 13:15:22.864631 1114112 command_runner.go:130] > enable_metrics = true
	I0603 13:15:22.864636 1114112 command_runner.go:130] > # Specify enabled metrics collectors.
	I0603 13:15:22.864641 1114112 command_runner.go:130] > # Per default all metrics are enabled.
	I0603 13:15:22.864647 1114112 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0603 13:15:22.864655 1114112 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0603 13:15:22.864660 1114112 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0603 13:15:22.864667 1114112 command_runner.go:130] > # metrics_collectors = [
	I0603 13:15:22.864670 1114112 command_runner.go:130] > # 	"operations",
	I0603 13:15:22.864675 1114112 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0603 13:15:22.864679 1114112 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0603 13:15:22.864685 1114112 command_runner.go:130] > # 	"operations_errors",
	I0603 13:15:22.864689 1114112 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0603 13:15:22.864695 1114112 command_runner.go:130] > # 	"image_pulls_by_name",
	I0603 13:15:22.864699 1114112 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0603 13:15:22.864703 1114112 command_runner.go:130] > # 	"image_pulls_failures",
	I0603 13:15:22.864707 1114112 command_runner.go:130] > # 	"image_pulls_successes",
	I0603 13:15:22.864713 1114112 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0603 13:15:22.864717 1114112 command_runner.go:130] > # 	"image_layer_reuse",
	I0603 13:15:22.864721 1114112 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0603 13:15:22.864725 1114112 command_runner.go:130] > # 	"containers_oom_total",
	I0603 13:15:22.864729 1114112 command_runner.go:130] > # 	"containers_oom",
	I0603 13:15:22.864733 1114112 command_runner.go:130] > # 	"processes_defunct",
	I0603 13:15:22.864741 1114112 command_runner.go:130] > # 	"operations_total",
	I0603 13:15:22.864747 1114112 command_runner.go:130] > # 	"operations_latency_seconds",
	I0603 13:15:22.864752 1114112 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0603 13:15:22.864758 1114112 command_runner.go:130] > # 	"operations_errors_total",
	I0603 13:15:22.864762 1114112 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0603 13:15:22.864766 1114112 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0603 13:15:22.864772 1114112 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0603 13:15:22.864776 1114112 command_runner.go:130] > # 	"image_pulls_success_total",
	I0603 13:15:22.864780 1114112 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0603 13:15:22.864787 1114112 command_runner.go:130] > # 	"containers_oom_count_total",
	I0603 13:15:22.864791 1114112 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0603 13:15:22.864798 1114112 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0603 13:15:22.864801 1114112 command_runner.go:130] > # ]
	I0603 13:15:22.864805 1114112 command_runner.go:130] > # The port on which the metrics server will listen.
	I0603 13:15:22.864815 1114112 command_runner.go:130] > # metrics_port = 9090
	I0603 13:15:22.864823 1114112 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0603 13:15:22.864827 1114112 command_runner.go:130] > # metrics_socket = ""
	I0603 13:15:22.864833 1114112 command_runner.go:130] > # The certificate for the secure metrics server.
	I0603 13:15:22.864841 1114112 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0603 13:15:22.864846 1114112 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0603 13:15:22.864853 1114112 command_runner.go:130] > # certificate on any modification event.
	I0603 13:15:22.864856 1114112 command_runner.go:130] > # metrics_cert = ""
	I0603 13:15:22.864861 1114112 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0603 13:15:22.864867 1114112 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0603 13:15:22.864871 1114112 command_runner.go:130] > # metrics_key = ""
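	With enable_metrics = true above and metrics_port left at its 9090 default, the Prometheus endpoint can be spot-checked from the node (a sketch; the port is an assumption if it was overridden elsewhere):
	  # Scrape a few CRI-O metrics from the default metrics port
	  curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_(operations|image_pulls)' | head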
	I0603 13:15:22.864879 1114112 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0603 13:15:22.864882 1114112 command_runner.go:130] > [crio.tracing]
	I0603 13:15:22.864890 1114112 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0603 13:15:22.864894 1114112 command_runner.go:130] > # enable_tracing = false
	I0603 13:15:22.864901 1114112 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0603 13:15:22.864905 1114112 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0603 13:15:22.864914 1114112 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0603 13:15:22.864919 1114112 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
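	Tracing is left disabled here; to confirm what the on-disk config actually says, grepping the section out of the CRI-O config files is enough (a sketch assuming the usual /etc/crio locations):
	  # Show the effective [crio.tracing] settings from the CRI-O config files
	  sudo grep -r -A 4 '\[crio.tracing\]' /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null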
	I0603 13:15:22.864925 1114112 command_runner.go:130] > # CRI-O NRI configuration.
	I0603 13:15:22.864929 1114112 command_runner.go:130] > [crio.nri]
	I0603 13:15:22.864935 1114112 command_runner.go:130] > # Globally enable or disable NRI.
	I0603 13:15:22.864943 1114112 command_runner.go:130] > # enable_nri = false
	I0603 13:15:22.864950 1114112 command_runner.go:130] > # NRI socket to listen on.
	I0603 13:15:22.864954 1114112 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0603 13:15:22.864958 1114112 command_runner.go:130] > # NRI plugin directory to use.
	I0603 13:15:22.864964 1114112 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0603 13:15:22.864969 1114112 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0603 13:15:22.864976 1114112 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0603 13:15:22.864981 1114112 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0603 13:15:22.864987 1114112 command_runner.go:130] > # nri_disable_connections = false
	I0603 13:15:22.864992 1114112 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0603 13:15:22.864999 1114112 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0603 13:15:22.865003 1114112 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0603 13:15:22.865009 1114112 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
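	NRI is likewise disabled by default; whether anything is listening can be checked against the default socket and plugin paths named above (a sketch; with enable_nri = false the socket is expected to be absent):
	  # Check for the NRI socket and any pre-installed plugins
	  test -S /var/run/nri/nri.sock && echo "NRI socket present" || echo "no NRI socket"
	  ls /opt/nri/plugins 2>/dev/null || echo "no NRI plugins installed"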
	I0603 13:15:22.865015 1114112 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0603 13:15:22.865019 1114112 command_runner.go:130] > [crio.stats]
	I0603 13:15:22.865025 1114112 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0603 13:15:22.865033 1114112 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0603 13:15:22.865036 1114112 command_runner.go:130] > # stats_collection_period = 0
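	With stats_collection_period left at 0, CRI-O gathers stats only when asked; crictl issues exactly that kind of on-demand request (a sketch, assuming crictl is pointed at the CRI-O socket as it is in this profile):
	  # On-demand container stats via the CRI, no background collection needed
	  sudo crictl stats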
	I0603 13:15:22.865195 1114112 cni.go:84] Creating CNI manager for ""
	I0603 13:15:22.865210 1114112 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0603 13:15:22.865220 1114112 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:15:22.865241 1114112 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-101468 NodeName:multinode-101468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:15:22.865394 1114112 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-101468"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
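	Before a config like the one above is handed to kubeadm, it can be validated in place (a sketch; kubeadm config validate assumes a kubeadm recent enough to ship that subcommand, and the path is where minikube stages the file a few lines below):
	  # Validate the staged kubeadm config against the target kubeadm version
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new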
	
	I0603 13:15:22.865465 1114112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:15:22.875999 1114112 command_runner.go:130] > kubeadm
	I0603 13:15:22.876022 1114112 command_runner.go:130] > kubectl
	I0603 13:15:22.876027 1114112 command_runner.go:130] > kubelet
	I0603 13:15:22.876099 1114112 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:15:22.876176 1114112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:15:22.885466 1114112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0603 13:15:22.902666 1114112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:15:22.919466 1114112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0603 13:15:22.937367 1114112 ssh_runner.go:195] Run: grep 192.168.39.141	control-plane.minikube.internal$ /etc/hosts
	I0603 13:15:22.941397 1114112 command_runner.go:130] > 192.168.39.141	control-plane.minikube.internal
	I0603 13:15:22.941528 1114112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:15:23.100868 1114112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:15:23.116099 1114112 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468 for IP: 192.168.39.141
	I0603 13:15:23.116159 1114112 certs.go:194] generating shared ca certs ...
	I0603 13:15:23.116185 1114112 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:15:23.116372 1114112 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:15:23.116412 1114112 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:15:23.116423 1114112 certs.go:256] generating profile certs ...
	I0603 13:15:23.116513 1114112 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/client.key
	I0603 13:15:23.116565 1114112 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.key.6effd4cc
	I0603 13:15:23.116598 1114112 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.key
	I0603 13:15:23.116609 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0603 13:15:23.116620 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0603 13:15:23.116637 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0603 13:15:23.116649 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0603 13:15:23.116660 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0603 13:15:23.116673 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0603 13:15:23.116684 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0603 13:15:23.116700 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0603 13:15:23.116767 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:15:23.116818 1114112 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:15:23.116832 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:15:23.116861 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:15:23.116890 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:15:23.116908 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:15:23.116955 1114112 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:15:23.116980 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.116993 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.117008 1114112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem -> /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.117927 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:15:23.143292 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:15:23.168452 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:15:23.192252 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:15:23.215918 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:15:23.240345 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:15:23.264637 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:15:23.288382 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/multinode-101468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:15:23.312328 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:15:23.336713 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:15:23.361876 1114112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:15:23.386167 1114112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:15:23.402851 1114112 ssh_runner.go:195] Run: openssl version
	I0603 13:15:23.408734 1114112 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0603 13:15:23.408816 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:15:23.420054 1114112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.424809 1114112 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.424861 1114112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.424899 1114112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:15:23.430501 1114112 command_runner.go:130] > 3ec20f2e
	I0603 13:15:23.430566 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:15:23.440411 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:15:23.452168 1114112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.456967 1114112 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.457107 1114112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.457167 1114112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:15:23.463299 1114112 command_runner.go:130] > b5213941
	I0603 13:15:23.464596 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:15:23.476647 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:15:23.488187 1114112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.492719 1114112 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.492754 1114112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.492802 1114112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:15:23.498540 1114112 command_runner.go:130] > 51391683
	I0603 13:15:23.498617 1114112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
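	The three blocks above all follow the same pattern: link the PEM into /etc/ssl/certs, derive its OpenSSL subject hash, and point <hash>.0 back at it so trust-store lookups resolve. A sketch of that pattern for a hypothetical extraCA.pem:
	  # extraCA.pem is a hypothetical example; mirrors the hash-link steps logged above
	  sudo ln -fs /usr/share/ca-certificates/extraCA.pem /etc/ssl/certs/extraCA.pem
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/extraCA.pem)
	  sudo ln -fs /etc/ssl/certs/extraCA.pem "/etc/ssl/certs/${HASH}.0"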
	I0603 13:15:23.510152 1114112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:15:23.515090 1114112 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:15:23.515128 1114112 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0603 13:15:23.515138 1114112 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0603 13:15:23.515148 1114112 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0603 13:15:23.515177 1114112 command_runner.go:130] > Access: 2024-06-03 13:09:22.392722678 +0000
	I0603 13:15:23.515190 1114112 command_runner.go:130] > Modify: 2024-06-03 13:09:22.392722678 +0000
	I0603 13:15:23.515197 1114112 command_runner.go:130] > Change: 2024-06-03 13:09:22.392722678 +0000
	I0603 13:15:23.515209 1114112 command_runner.go:130] >  Birth: 2024-06-03 13:09:22.392722678 +0000
	I0603 13:15:23.515281 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:15:23.521592 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.521865 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:15:23.527790 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.528027 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:15:23.534195 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.534392 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:15:23.540324 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.540383 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:15:23.546213 1114112 command_runner.go:130] > Certificate will not expire
	I0603 13:15:23.546429 1114112 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 13:15:23.552649 1114112 command_runner.go:130] > Certificate will not expire
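	Each check above runs openssl with -checkend 86400, i.e. it asks whether the certificate will still be valid 24 hours from now; the same check can be repeated by hand against the certs staged earlier:
	  # Exit status 0 means the cert is still valid 24h from now
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "Certificate will not expire" || echo "Certificate expires within 24h"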
	I0603 13:15:23.552727 1114112 kubeadm.go:391] StartCluster: {Name:multinode-101468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-101468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.203 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:15:23.552840 1114112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:15:23.552896 1114112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:15:23.598377 1114112 command_runner.go:130] > 4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166
	I0603 13:15:23.598404 1114112 command_runner.go:130] > c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3
	I0603 13:15:23.598411 1114112 command_runner.go:130] > b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55
	I0603 13:15:23.598419 1114112 command_runner.go:130] > 4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7
	I0603 13:15:23.598424 1114112 command_runner.go:130] > 21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727
	I0603 13:15:23.598429 1114112 command_runner.go:130] > d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3
	I0603 13:15:23.598440 1114112 command_runner.go:130] > 796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f
	I0603 13:15:23.598450 1114112 command_runner.go:130] > e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096
	I0603 13:15:23.598483 1114112 cri.go:89] found id: "4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166"
	I0603 13:15:23.598492 1114112 cri.go:89] found id: "c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3"
	I0603 13:15:23.598495 1114112 cri.go:89] found id: "b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55"
	I0603 13:15:23.598498 1114112 cri.go:89] found id: "4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7"
	I0603 13:15:23.598501 1114112 cri.go:89] found id: "21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727"
	I0603 13:15:23.598506 1114112 cri.go:89] found id: "d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3"
	I0603 13:15:23.598509 1114112 cri.go:89] found id: "796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f"
	I0603 13:15:23.598512 1114112 cri.go:89] found id: "e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096"
	I0603 13:15:23.598514 1114112 cri.go:89] found id: ""
	I0603 13:15:23.598559 1114112 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.132563023Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717420748132541622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cee27d6d-c7f4-4503-85a3-a4daa829363b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.133205124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe2fc3a0-3eb2-4f14-97c0-9a25e1f3196f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.133280239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe2fc3a0-3eb2-4f14-97c0-9a25e1f3196f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.133685535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:638078678bbb1e7359164bfde6c512c483000ef1fb524416d4f2c0817749b67d,PodSandboxId:65d39c69b35e974aabef43493e6ff16b27ce52472d1b264650f930122df5ff76,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717420563566846044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c,PodSandboxId:420c4b77ca003abeb4ca2cc9c79777203d60c304f197ce46a6d8691c7d7e0029,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717420529993150557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219,PodSandboxId:243827bd1bbb9cce2948cb638088914ba16c5a0b89e7be7b5b6b4fdd27151e60,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717420529963896439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f
0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f420f7e5b26fef5489931b9a2b3ce0b4bc0f6cd832ae70950acf8e34fc8f97c,PodSandboxId:267760408b1b8ef656a779d0b188e5b04ae9c32b3b6a9d326fac2e2d5e567dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717420529881369545,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},An
notations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d,PodSandboxId:da05292c8449d5c902f541bec9bc609284234cbe5e2d19f1eaef1ce8ad6b9a1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717420529804680902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca,PodSandboxId:01c9270f89191bd2de7d3adf3e0a0a2d705d439b925ca9a2562e02bd46baf958,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717420526047790762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3,PodSandboxId:2840e96fa2e7090f356f5e7c41aa3c8731c9901af22ff231660c9962c4e8bf3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717420526000046093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb,PodSandboxId:1737d2a651d4666ebccc320b9251e96666b9e18973f1b0578464b033366ec903,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717420526017502749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31,PodSandboxId:9cf1b716a8eca5ec118f3498e95cb8548791359877cef31c354c19a230d6f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717420525955565407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e5ab5496d7e7db8f5b2d7ea36ba64a84ede2f508b16a2fc00edb87740393109,PodSandboxId:c293a70ba7ceac8adf622b552619290e65c8d4abff507481189dfb62d5ebf39c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717420229782857320,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166,PodSandboxId:4d9065a030ffe63bfe7748d1d106db329870299e20d9a8773f45bef910578630,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717420189613417242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3,PodSandboxId:a4b34884aaaa17d07560def26c9f359984ef0f7155d017a2c1b19a6b3c07f6e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717420189575251486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55,PodSandboxId:c3ae5b1239c127ea1c3d266bdbc5ee258abe17a07155f4fbeb527a0ffd88d2e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717420188328971472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7,PodSandboxId:e055bef89aef572e5003403c6061f1c6923792ca977d474bb6508c5370028764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717420185928954873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3,PodSandboxId:f10085f6aa1f8124745fba1d55bd4818fdd9515f0fbbac5b35d9856397c00530,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717420166857354713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d
84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727,PodSandboxId:af0205261b8d4f59896594a7f99964fcde0a17d73a703f933043c87a0b24de8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717420166908479492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df886665
58ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f,PodSandboxId:af3bfbe386473bcfc6b51ddb30ff65b002f1ab37171c0c16b7b5c30f4f5b1899,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717420166856585006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096,PodSandboxId:f9da6a04531a7f21581fc7febdacaa4e365226a02593c378dc3afdd315a8b302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717420166811627955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe2fc3a0-3eb2-4f14-97c0-9a25e1f3196f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.177604061Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5774ba9-1bbd-49ac-b76f-d5c28faeda77 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.177693636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5774ba9-1bbd-49ac-b76f-d5c28faeda77 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.179000388Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db7b732b-daa0-4bc8-ba72-ab5355b7dae4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.179704523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717420748179648279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db7b732b-daa0-4bc8-ba72-ab5355b7dae4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.180355877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cdff9ca-ec8a-4922-84da-d1a95def7cfb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.180431604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cdff9ca-ec8a-4922-84da-d1a95def7cfb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.180826933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:638078678bbb1e7359164bfde6c512c483000ef1fb524416d4f2c0817749b67d,PodSandboxId:65d39c69b35e974aabef43493e6ff16b27ce52472d1b264650f930122df5ff76,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717420563566846044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c,PodSandboxId:420c4b77ca003abeb4ca2cc9c79777203d60c304f197ce46a6d8691c7d7e0029,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717420529993150557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219,PodSandboxId:243827bd1bbb9cce2948cb638088914ba16c5a0b89e7be7b5b6b4fdd27151e60,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717420529963896439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f
0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f420f7e5b26fef5489931b9a2b3ce0b4bc0f6cd832ae70950acf8e34fc8f97c,PodSandboxId:267760408b1b8ef656a779d0b188e5b04ae9c32b3b6a9d326fac2e2d5e567dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717420529881369545,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},An
notations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d,PodSandboxId:da05292c8449d5c902f541bec9bc609284234cbe5e2d19f1eaef1ce8ad6b9a1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717420529804680902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca,PodSandboxId:01c9270f89191bd2de7d3adf3e0a0a2d705d439b925ca9a2562e02bd46baf958,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717420526047790762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3,PodSandboxId:2840e96fa2e7090f356f5e7c41aa3c8731c9901af22ff231660c9962c4e8bf3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717420526000046093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb,PodSandboxId:1737d2a651d4666ebccc320b9251e96666b9e18973f1b0578464b033366ec903,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717420526017502749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31,PodSandboxId:9cf1b716a8eca5ec118f3498e95cb8548791359877cef31c354c19a230d6f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717420525955565407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e5ab5496d7e7db8f5b2d7ea36ba64a84ede2f508b16a2fc00edb87740393109,PodSandboxId:c293a70ba7ceac8adf622b552619290e65c8d4abff507481189dfb62d5ebf39c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717420229782857320,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166,PodSandboxId:4d9065a030ffe63bfe7748d1d106db329870299e20d9a8773f45bef910578630,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717420189613417242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3,PodSandboxId:a4b34884aaaa17d07560def26c9f359984ef0f7155d017a2c1b19a6b3c07f6e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717420189575251486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55,PodSandboxId:c3ae5b1239c127ea1c3d266bdbc5ee258abe17a07155f4fbeb527a0ffd88d2e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717420188328971472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7,PodSandboxId:e055bef89aef572e5003403c6061f1c6923792ca977d474bb6508c5370028764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717420185928954873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3,PodSandboxId:f10085f6aa1f8124745fba1d55bd4818fdd9515f0fbbac5b35d9856397c00530,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717420166857354713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d
84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727,PodSandboxId:af0205261b8d4f59896594a7f99964fcde0a17d73a703f933043c87a0b24de8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717420166908479492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df886665
58ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f,PodSandboxId:af3bfbe386473bcfc6b51ddb30ff65b002f1ab37171c0c16b7b5c30f4f5b1899,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717420166856585006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096,PodSandboxId:f9da6a04531a7f21581fc7febdacaa4e365226a02593c378dc3afdd315a8b302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717420166811627955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cdff9ca-ec8a-4922-84da-d1a95def7cfb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.223133850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e61fe138-5cbb-4a55-9bfb-2781560dd65b name=/runtime.v1.RuntimeService/Version
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.223210325Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e61fe138-5cbb-4a55-9bfb-2781560dd65b name=/runtime.v1.RuntimeService/Version
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.224332116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28449e60-0550-437b-ac34-1371afd72afb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.224777028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717420748224751124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28449e60-0550-437b-ac34-1371afd72afb name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.225305553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac14805f-7d1b-433b-91d4-d6827f166a4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.225387042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac14805f-7d1b-433b-91d4-d6827f166a4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.225745453Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:638078678bbb1e7359164bfde6c512c483000ef1fb524416d4f2c0817749b67d,PodSandboxId:65d39c69b35e974aabef43493e6ff16b27ce52472d1b264650f930122df5ff76,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717420563566846044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c,PodSandboxId:420c4b77ca003abeb4ca2cc9c79777203d60c304f197ce46a6d8691c7d7e0029,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717420529993150557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219,PodSandboxId:243827bd1bbb9cce2948cb638088914ba16c5a0b89e7be7b5b6b4fdd27151e60,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717420529963896439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f
0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f420f7e5b26fef5489931b9a2b3ce0b4bc0f6cd832ae70950acf8e34fc8f97c,PodSandboxId:267760408b1b8ef656a779d0b188e5b04ae9c32b3b6a9d326fac2e2d5e567dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717420529881369545,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},An
notations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d,PodSandboxId:da05292c8449d5c902f541bec9bc609284234cbe5e2d19f1eaef1ce8ad6b9a1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717420529804680902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca,PodSandboxId:01c9270f89191bd2de7d3adf3e0a0a2d705d439b925ca9a2562e02bd46baf958,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717420526047790762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3,PodSandboxId:2840e96fa2e7090f356f5e7c41aa3c8731c9901af22ff231660c9962c4e8bf3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717420526000046093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb,PodSandboxId:1737d2a651d4666ebccc320b9251e96666b9e18973f1b0578464b033366ec903,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717420526017502749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31,PodSandboxId:9cf1b716a8eca5ec118f3498e95cb8548791359877cef31c354c19a230d6f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717420525955565407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e5ab5496d7e7db8f5b2d7ea36ba64a84ede2f508b16a2fc00edb87740393109,PodSandboxId:c293a70ba7ceac8adf622b552619290e65c8d4abff507481189dfb62d5ebf39c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717420229782857320,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166,PodSandboxId:4d9065a030ffe63bfe7748d1d106db329870299e20d9a8773f45bef910578630,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717420189613417242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3,PodSandboxId:a4b34884aaaa17d07560def26c9f359984ef0f7155d017a2c1b19a6b3c07f6e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717420189575251486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55,PodSandboxId:c3ae5b1239c127ea1c3d266bdbc5ee258abe17a07155f4fbeb527a0ffd88d2e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717420188328971472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7,PodSandboxId:e055bef89aef572e5003403c6061f1c6923792ca977d474bb6508c5370028764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717420185928954873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3,PodSandboxId:f10085f6aa1f8124745fba1d55bd4818fdd9515f0fbbac5b35d9856397c00530,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717420166857354713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d
84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727,PodSandboxId:af0205261b8d4f59896594a7f99964fcde0a17d73a703f933043c87a0b24de8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717420166908479492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df886665
58ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f,PodSandboxId:af3bfbe386473bcfc6b51ddb30ff65b002f1ab37171c0c16b7b5c30f4f5b1899,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717420166856585006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096,PodSandboxId:f9da6a04531a7f21581fc7febdacaa4e365226a02593c378dc3afdd315a8b302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717420166811627955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac14805f-7d1b-433b-91d4-d6827f166a4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.269037885Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=787f64f2-0b4a-4891-8edb-f5bf6d3290e9 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.269162302Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=787f64f2-0b4a-4891-8edb-f5bf6d3290e9 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.270804920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f11ef75-78ef-4261-8909-c965a3ccc76b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.273306002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717420748273275697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f11ef75-78ef-4261-8909-c965a3ccc76b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.275963553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d18219da-e72f-4dca-a42f-ffc9ffcc2597 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.276016850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d18219da-e72f-4dca-a42f-ffc9ffcc2597 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:19:08 multinode-101468 crio[2896]: time="2024-06-03 13:19:08.276408772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:638078678bbb1e7359164bfde6c512c483000ef1fb524416d4f2c0817749b67d,PodSandboxId:65d39c69b35e974aabef43493e6ff16b27ce52472d1b264650f930122df5ff76,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717420563566846044,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c,PodSandboxId:420c4b77ca003abeb4ca2cc9c79777203d60c304f197ce46a6d8691c7d7e0029,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717420529993150557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219,PodSandboxId:243827bd1bbb9cce2948cb638088914ba16c5a0b89e7be7b5b6b4fdd27151e60,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717420529963896439,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f
0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f420f7e5b26fef5489931b9a2b3ce0b4bc0f6cd832ae70950acf8e34fc8f97c,PodSandboxId:267760408b1b8ef656a779d0b188e5b04ae9c32b3b6a9d326fac2e2d5e567dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717420529881369545,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},An
notations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d,PodSandboxId:da05292c8449d5c902f541bec9bc609284234cbe5e2d19f1eaef1ce8ad6b9a1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717420529804680902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca,PodSandboxId:01c9270f89191bd2de7d3adf3e0a0a2d705d439b925ca9a2562e02bd46baf958,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717420526047790762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3,PodSandboxId:2840e96fa2e7090f356f5e7c41aa3c8731c9901af22ff231660c9962c4e8bf3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717420526000046093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:map[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb,PodSandboxId:1737d2a651d4666ebccc320b9251e96666b9e18973f1b0578464b033366ec903,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717420526017502749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31,PodSandboxId:9cf1b716a8eca5ec118f3498e95cb8548791359877cef31c354c19a230d6f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717420525955565407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df88666558ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e5ab5496d7e7db8f5b2d7ea36ba64a84ede2f508b16a2fc00edb87740393109,PodSandboxId:c293a70ba7ceac8adf622b552619290e65c8d4abff507481189dfb62d5ebf39c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717420229782857320,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-7jrcp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a0d546e-6072-497f-8464-3a2dd172f9a3,},Annotations:map[string]string{io.kubernetes.container.hash: ce8be3dc,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166,PodSandboxId:4d9065a030ffe63bfe7748d1d106db329870299e20d9a8773f45bef910578630,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717420189613417242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rszqr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceb550ef-f06f-425c-b564-f4ad51d298bc,},Annotations:map[string]string{io.kubernetes.container.hash: 5f040b62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c41d1ef9ae6ed41e317322b6b6f9ecdde78b0252da19619c1a79af409962ccf3,PodSandboxId:a4b34884aaaa17d07560def26c9f359984ef0f7155d017a2c1b19a6b3c07f6e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717420189575251486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 9bf865e3-3171-4447-a928-3f7bcde9b7c4,},Annotations:map[string]string{io.kubernetes.container.hash: cb13b890,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55,PodSandboxId:c3ae5b1239c127ea1c3d266bdbc5ee258abe17a07155f4fbeb527a0ffd88d2e9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717420188328971472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-m96bv,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3e7c090a-031c-483b-b89d-6192f0b73a9d,},Annotations:map[string]string{io.kubernetes.container.hash: c3f2c2e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7,PodSandboxId:e055bef89aef572e5003403c6061f1c6923792ca977d474bb6508c5370028764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717420185928954873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nf6c2,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 10b1fbac-04e0-46c6-a2cd-8befd0343e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2dd8b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3,PodSandboxId:f10085f6aa1f8124745fba1d55bd4818fdd9515f0fbbac5b35d9856397c00530,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717420166857354713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8642d3d47b20a69d006a8efccbbe2d
84,},Annotations:map[string]string{io.kubernetes.container.hash: e28d7e08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727,PodSandboxId:af0205261b8d4f59896594a7f99964fcde0a17d73a703f933043c87a0b24de8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717420166908479492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7f804e707df886665
58ffa84b5d48ff,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f,PodSandboxId:af3bfbe386473bcfc6b51ddb30ff65b002f1ab37171c0c16b7b5c30f4f5b1899,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717420166856585006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3cdbe5a6f72ed950e19c2ab2acb01,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096,PodSandboxId:f9da6a04531a7f21581fc7febdacaa4e365226a02593c378dc3afdd315a8b302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717420166811627955,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-101468,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e547dd6860d1022394e38f43085b7,},Annotations:m
ap[string]string{io.kubernetes.container.hash: c125c438,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d18219da-e72f-4dca-a42f-ffc9ffcc2597 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	638078678bbb1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   65d39c69b35e9       busybox-fc5497c4f-7jrcp
	981940dc117e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   420c4b77ca003       coredns-7db6d8ff4d-rszqr
	7a98e88b0a3e8       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   243827bd1bbb9       kindnet-m96bv
	5f420f7e5b26f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   267760408b1b8       storage-provisioner
	cda936f669af5       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   da05292c8449d       kube-proxy-nf6c2
	fe63357eb594d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   01c9270f89191       etcd-multinode-101468
	115ca08701ae5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   1737d2a651d46       kube-scheduler-multinode-101468
	2b7fd9adda334       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   2840e96fa2e70       kube-apiserver-multinode-101468
	64361fea21d48       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   9cf1b716a8eca       kube-controller-manager-multinode-101468
	0e5ab5496d7e7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   c293a70ba7cea       busybox-fc5497c4f-7jrcp
	4aaed31d9690e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   4d9065a030ffe       coredns-7db6d8ff4d-rszqr
	c41d1ef9ae6ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   a4b34884aaaa1       storage-provisioner
	b5fb5fac18c27       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    9 minutes ago       Exited              kindnet-cni               0                   c3ae5b1239c12       kindnet-m96bv
	4c205814428f5       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      9 minutes ago       Exited              kube-proxy                0                   e055bef89aef5       kube-proxy-nf6c2
	21a5bccaa9cf3       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Exited              kube-controller-manager   0                   af0205261b8d4       kube-controller-manager-multinode-101468
	d685e8439e323       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   f10085f6aa1f8       etcd-multinode-101468
	796bbd6b016f5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      9 minutes ago       Exited              kube-scheduler            0                   af3bfbe386473       kube-scheduler-multinode-101468
	e9be4b439e872       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Exited              kube-apiserver            0                   f9da6a04531a7       kube-apiserver-multinode-101468
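For reference, a container listing of this shape can typically be regenerated directly on the node; the profile name multinode-101468 is inferred from the pod and node names above, and the exact invocation below is a sketch rather than part of this report:

	minikube -p multinode-101468 ssh "sudo crictl ps -a"      # running and exited containers, as in the table above
	minikube -p multinode-101468 ssh "sudo crictl pods"       # the sandboxes referenced in the POD ID column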
	
	
	==> coredns [4aaed31d9690e67af1e8a3189c7ad89bbe7cca30dd59d49fa24f78fbb9b81166] <==
	[INFO] 10.244.0.3:43959 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001862491s
	[INFO] 10.244.0.3:57291 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086461s
	[INFO] 10.244.0.3:40828 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081709s
	[INFO] 10.244.0.3:50719 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001335634s
	[INFO] 10.244.0.3:45764 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050319s
	[INFO] 10.244.0.3:41720 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000030767s
	[INFO] 10.244.0.3:35115 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042304s
	[INFO] 10.244.1.2:57848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012116s
	[INFO] 10.244.1.2:36461 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101992s
	[INFO] 10.244.1.2:55584 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008513s
	[INFO] 10.244.1.2:39933 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076652s
	[INFO] 10.244.0.3:52564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114075s
	[INFO] 10.244.0.3:37895 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068288s
	[INFO] 10.244.0.3:50037 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049075s
	[INFO] 10.244.0.3:60385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111106s
	[INFO] 10.244.1.2:38500 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128494s
	[INFO] 10.244.1.2:59854 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155604s
	[INFO] 10.244.1.2:48098 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090347s
	[INFO] 10.244.1.2:47118 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113579s
	[INFO] 10.244.0.3:41061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096684s
	[INFO] 10.244.0.3:48514 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101317s
	[INFO] 10.244.0.3:33790 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052214s
	[INFO] 10.244.0.3:52582 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071463s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [981940dc117e5b39129bdfede644eebffb008799a316d3bfa0f1984889408b6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33921 - 634 "HINFO IN 8246024549565837961.4546759840687616933. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.05002989s
	
	
	==> describe nodes <==
	Name:               multinode-101468
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101468
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-101468
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_09_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:09:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101468
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:19:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:15:29 +0000   Mon, 03 Jun 2024 13:09:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:15:29 +0000   Mon, 03 Jun 2024 13:09:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:15:29 +0000   Mon, 03 Jun 2024 13:09:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:15:29 +0000   Mon, 03 Jun 2024 13:09:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    multinode-101468
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbce10a053614ea7b4edc56b16e8c1e3
	  System UUID:                fbce10a0-5361-4ea7-b4ed-c56b16e8c1e3
	  Boot ID:                    5dc59376-86a3-4f14-bf29-4db523acb769
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7jrcp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)       8m40s
	  kube-system                 coredns-7db6d8ff4d-rszqr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)   9m23s
	  kube-system                 etcd-multinode-101468                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)       9m36s
	  kube-system                 kindnet-m96bv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)    9m23s
	  kube-system                 kube-apiserver-multinode-101468             250m (12%)    0 (0%)      0 (0%)           0 (0%)       9m38s
	  kube-system                 kube-controller-manager-multinode-101468    200m (10%)    0 (0%)      0 (0%)           0 (0%)       9m36s
	  kube-system                 kube-proxy-nf6c2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)       9m23s
	  kube-system                 kube-scheduler-multinode-101468             100m (5%)     0 (0%)      0 (0%)           0 (0%)       9m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)       9m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m22s                  kube-proxy       
	  Normal  Starting                 3m38s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m42s (x8 over 9m43s)  kubelet          Node multinode-101468 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m42s (x8 over 9m43s)  kubelet          Node multinode-101468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m42s (x7 over 9m43s)  kubelet          Node multinode-101468 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m37s                  kubelet          Node multinode-101468 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m37s                  kubelet          Node multinode-101468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m37s                  kubelet          Node multinode-101468 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m37s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m24s                  node-controller  Node multinode-101468 event: Registered Node multinode-101468 in Controller
	  Normal  NodeReady                9m19s                  kubelet          Node multinode-101468 status is now: NodeReady
	  Normal  Starting                 3m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m43s (x8 over 3m43s)  kubelet          Node multinode-101468 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s (x8 over 3m43s)  kubelet          Node multinode-101468 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s (x7 over 3m43s)  kubelet          Node multinode-101468 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m26s                  node-controller  Node multinode-101468 event: Registered Node multinode-101468 in Controller
	
	
	Name:               multinode-101468-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-101468-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=multinode-101468
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_03T13_16_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:16:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-101468-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:16:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 03 Jun 2024 13:16:38 +0000   Mon, 03 Jun 2024 13:17:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 03 Jun 2024 13:16:38 +0000   Mon, 03 Jun 2024 13:17:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 03 Jun 2024 13:16:38 +0000   Mon, 03 Jun 2024 13:17:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 03 Jun 2024 13:16:38 +0000   Mon, 03 Jun 2024 13:17:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    multinode-101468-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 091f000580784a52ac949dd19c81c1ff
	  System UUID:                091f0005-8078-4a52-ac94-9dd19c81c1ff
	  Boot ID:                    3671d987-c1a9-4829-8b8b-3bef68dcee08
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hjfgd    0 (0%)        0 (0%)      0 (0%)           0 (0%)       3m6s
	  kube-system                 kindnet-2lwvt              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)    8m50s
	  kube-system                 kube-proxy-zq896           0 (0%)        0 (0%)      0 (0%)           0 (0%)       8m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m45s                  kube-proxy       
	  Normal  Starting                 2m56s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    8m50s (x2 over 8m50s)  kubelet          Node multinode-101468-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m50s (x2 over 8m50s)  kubelet          Node multinode-101468-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m50s (x2 over 8m50s)  kubelet          Node multinode-101468-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                8m42s                  kubelet          Node multinode-101468-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)    kubelet          Node multinode-101468-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)    kubelet          Node multinode-101468-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)    kubelet          Node multinode-101468-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m56s                  node-controller  Node multinode-101468-m02 event: Registered Node multinode-101468-m02 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node multinode-101468-m02 status is now: NodeReady
	  Normal  NodeNotReady             96s                    node-controller  Node multinode-101468-m02 status is now: NodeNotReady
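The two node summaries above are standard "kubectl describe node" output; multinode-101468-m02 carries the node.kubernetes.io/unreachable taints and Unknown conditions because its kubelet stopped posting node status. Assuming the kubeconfig context matches the profile name (minikube's default), the same view can be reproduced with:

	kubectl --context multinode-101468 get nodes -o wide
	kubectl --context multinode-101468 describe node multinode-101468-m02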
	
	
	==> dmesg <==
	[  +6.483250] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.055742] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060969] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.162613] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.134263] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.280125] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.259872] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.506475] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062202] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.471282] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.084758] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.482107] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.675168] systemd-fstab-generator[1474]: Ignoring "noauto" option for root device
	[  +5.250697] kauditd_printk_skb: 80 callbacks suppressed
	[Jun 3 13:15] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.143642] systemd-fstab-generator[2826]: Ignoring "noauto" option for root device
	[  +0.182463] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.155772] systemd-fstab-generator[2852]: Ignoring "noauto" option for root device
	[  +0.321514] systemd-fstab-generator[2880]: Ignoring "noauto" option for root device
	[  +0.741085] systemd-fstab-generator[2978]: Ignoring "noauto" option for root device
	[  +2.051083] systemd-fstab-generator[3103]: Ignoring "noauto" option for root device
	[  +4.641857] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.709565] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.213684] systemd-fstab-generator[3923]: Ignoring "noauto" option for root device
	[Jun 3 13:16] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [d685e8439e32392d6f330b06bfe5a68dc490239dfa52a2263107ab0486ca22d3] <==
	{"level":"info","ts":"2024-06-03T13:09:27.698377Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-06-03T13:10:18.503406Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.19218ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8343920919220089995 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-101468-m02.17d58096d5530ffd\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-101468-m02.17d58096d5530ffd\" value_size:642 lease:8343920919220089018 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-03T13:10:18.503807Z","caller":"traceutil/trace.go:171","msg":"trace[1232507046] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"256.775489ms","start":"2024-06-03T13:10:18.246985Z","end":"2024-06-03T13:10:18.50376Z","steps":["trace[1232507046] 'process raft request'  (duration: 131.331476ms)","trace[1232507046] 'compare'  (duration: 124.100984ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T13:10:18.504616Z","caller":"traceutil/trace.go:171","msg":"trace[1636408013] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"175.00846ms","start":"2024-06-03T13:10:18.329583Z","end":"2024-06-03T13:10:18.504592Z","steps":["trace[1636408013] 'process raft request'  (duration: 174.045088ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:10:22.85024Z","caller":"traceutil/trace.go:171","msg":"trace[550781007] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"150.752963ms","start":"2024-06-03T13:10:22.699427Z","end":"2024-06-03T13:10:22.85018Z","steps":["trace[550781007] 'process raft request'  (duration: 150.598237ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:10:22.912768Z","caller":"traceutil/trace.go:171","msg":"trace[2141742139] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"132.93847ms","start":"2024-06-03T13:10:22.779813Z","end":"2024-06-03T13:10:22.912752Z","steps":["trace[2141742139] 'process raft request'  (duration: 132.840839ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:10:57.835395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.050474ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8343920919220090360 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-101468-m03.17d5809ffe106274\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-101468-m03.17d5809ffe106274\" value_size:640 lease:8343920919220090165 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-03T13:10:57.835917Z","caller":"traceutil/trace.go:171","msg":"trace[1418118352] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"227.069098ms","start":"2024-06-03T13:10:57.608827Z","end":"2024-06-03T13:10:57.835896Z","steps":["trace[1418118352] 'process raft request'  (duration: 121.386696ms)","trace[1418118352] 'compare'  (duration: 104.707903ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T13:10:57.836134Z","caller":"traceutil/trace.go:171","msg":"trace[1797267974] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"161.315102ms","start":"2024-06-03T13:10:57.674743Z","end":"2024-06-03T13:10:57.836058Z","steps":["trace[1797267974] 'process raft request'  (duration: 161.15068ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:10:57.836254Z","caller":"traceutil/trace.go:171","msg":"trace[1173506688] linearizableReadLoop","detail":"{readStateIndex:620; appliedIndex:619; }","duration":"188.810089ms","start":"2024-06-03T13:10:57.647435Z","end":"2024-06-03T13:10:57.836245Z","steps":["trace[1173506688] 'read index received'  (duration: 82.732802ms)","trace[1173506688] 'applied index is now lower than readState.Index'  (duration: 106.076446ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T13:10:57.836468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.023952ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-101468-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-06-03T13:10:57.836527Z","caller":"traceutil/trace.go:171","msg":"trace[1269450301] range","detail":"{range_begin:/registry/minions/multinode-101468-m03; range_end:; response_count:1; response_revision:590; }","duration":"189.120434ms","start":"2024-06-03T13:10:57.647396Z","end":"2024-06-03T13:10:57.836516Z","steps":["trace[1269450301] 'agreement among raft nodes before linearized reading'  (duration: 188.948124ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:10:57.836498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.89141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T13:10:57.836617Z","caller":"traceutil/trace.go:171","msg":"trace[1983427030] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:590; }","duration":"138.038172ms","start":"2024-06-03T13:10:57.698569Z","end":"2024-06-03T13:10:57.836607Z","steps":["trace[1983427030] 'agreement among raft nodes before linearized reading'  (duration: 137.905816ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:11:40.266758Z","caller":"traceutil/trace.go:171","msg":"trace[1282143115] transaction","detail":"{read_only:false; response_revision:700; number_of_response:1; }","duration":"115.697577ms","start":"2024-06-03T13:11:40.15104Z","end":"2024-06-03T13:11:40.266737Z","steps":["trace[1282143115] 'process raft request'  (duration: 115.575772ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:13:50.155986Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-03T13:13:50.156168Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-101468","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"]}
	{"level":"warn","ts":"2024-06-03T13:13:50.156316Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T13:13:50.156424Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T13:13:50.240135Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.141:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-03T13:13:50.240185Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.141:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-03T13:13:50.24031Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2398e045949c73cb","current-leader-member-id":"2398e045949c73cb"}
	{"level":"info","ts":"2024-06-03T13:13:50.243278Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-06-03T13:13:50.243414Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-06-03T13:13:50.243425Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-101468","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"]}
	
	
	==> etcd [fe63357eb594d2e010697e48431e7f3c3eea9a1aaea980335c0d4335033da8ca] <==
	{"level":"info","ts":"2024-06-03T13:15:26.533517Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:15:26.53377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb switched to configuration voters=(2565046577238143947)"}
	{"level":"info","ts":"2024-06-03T13:15:26.533847Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","added-peer-id":"2398e045949c73cb","added-peer-peer-urls":["https://192.168.39.141:2380"]}
	{"level":"info","ts":"2024-06-03T13:15:26.533963Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:15:26.534006Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:15:26.584197Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T13:15:26.584457Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2398e045949c73cb","initial-advertise-peer-urls":["https://192.168.39.141:2380"],"listen-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.141:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T13:15:26.58451Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T13:15:26.584621Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-06-03T13:15:26.584647Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.141:2380"}
	{"level":"info","ts":"2024-06-03T13:15:27.684665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T13:15:27.684725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T13:15:27.684749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgPreVoteResp from 2398e045949c73cb at term 2"}
	{"level":"info","ts":"2024-06-03T13:15:27.684762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T13:15:27.684768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgVoteResp from 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2024-06-03T13:15:27.684776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became leader at term 3"}
	{"level":"info","ts":"2024-06-03T13:15:27.684784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2398e045949c73cb elected leader 2398e045949c73cb at term 3"}
	{"level":"info","ts":"2024-06-03T13:15:27.690551Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2398e045949c73cb","local-member-attributes":"{Name:multinode-101468 ClientURLs:[https://192.168.39.141:2379]}","request-path":"/0/members/2398e045949c73cb/attributes","cluster-id":"bf8381628c3e4cea","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T13:15:27.690732Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:15:27.690768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:15:27.691228Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T13:15:27.691301Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T13:15:27.693027Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.141:2379"}
	{"level":"info","ts":"2024-06-03T13:15:27.693198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T13:16:37.999867Z","caller":"traceutil/trace.go:171","msg":"trace[1945236411] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"132.54954ms","start":"2024-06-03T13:16:37.867255Z","end":"2024-06-03T13:16:37.999805Z","steps":["trace[1945236411] 'process raft request'  (duration: 132.333333ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:19:08 up 10 min,  0 users,  load average: 0.13, 0.19, 0.12
	Linux multinode-101468 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a98e88b0a3e80fcaff5145157d273db81ab5e79b0361e5bf5d87a7f94af5219] <==
	I0603 13:18:00.936428       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:18:10.941733       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:18:10.941841       1 main.go:227] handling current node
	I0603 13:18:10.941873       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:18:10.941896       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:18:20.947816       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:18:20.947866       1 main.go:227] handling current node
	I0603 13:18:20.947877       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:18:20.947883       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:18:30.952476       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:18:30.952592       1 main.go:227] handling current node
	I0603 13:18:30.952617       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:18:30.952635       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:18:40.964825       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:18:40.964867       1 main.go:227] handling current node
	I0603 13:18:40.964877       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:18:40.964882       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:18:50.969921       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:18:50.969963       1 main.go:227] handling current node
	I0603 13:18:50.969973       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:18:50.969978       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:19:00.976870       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:19:00.977162       1 main.go:227] handling current node
	I0603 13:19:00.977253       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:19:00.977322       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [b5fb5fac18c2746c85ef619629aa9f8e67e8c840e0ebaf7de43627d03ef00e55] <==
	I0603 13:13:09.094408       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:13:19.107719       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:13:19.107825       1 main.go:227] handling current node
	I0603 13:13:19.107854       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:13:19.107878       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:13:19.107995       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:13:19.108017       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:13:29.121049       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:13:29.121211       1 main.go:227] handling current node
	I0603 13:13:29.121235       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:13:29.121253       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:13:29.121457       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:13:29.121526       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:13:39.132519       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:13:39.132641       1 main.go:227] handling current node
	I0603 13:13:39.132669       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:13:39.132732       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:13:39.132987       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:13:39.133031       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	I0603 13:13:49.142793       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0603 13:13:49.142982       1 main.go:227] handling current node
	I0603 13:13:49.143007       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0603 13:13:49.143025       1 main.go:250] Node multinode-101468-m02 has CIDR [10.244.1.0/24] 
	I0603 13:13:49.143300       1 main.go:223] Handling node with IPs: map[192.168.39.203:{}]
	I0603 13:13:49.143386       1 main.go:250] Node multinode-101468-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2b7fd9adda334d55768d0d9b6cf77daedde0203b28a375a75eb3fc3c344c52a3] <==
	I0603 13:15:29.071919       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 13:15:29.071958       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 13:15:29.075806       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 13:15:29.076174       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 13:15:29.076297       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 13:15:29.079829       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 13:15:29.079945       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 13:15:29.082326       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 13:15:29.082359       1 policy_source.go:224] refreshing policies
	I0603 13:15:29.082761       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 13:15:29.084623       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0603 13:15:29.099222       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0603 13:15:29.111458       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 13:15:29.111658       1 aggregator.go:165] initial CRD sync complete...
	I0603 13:15:29.111741       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 13:15:29.111764       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 13:15:29.111787       1 cache.go:39] Caches are synced for autoregister controller
	I0603 13:15:29.988652       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 13:15:31.287751       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 13:15:31.414271       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 13:15:31.434275       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 13:15:31.504934       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 13:15:31.512154       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 13:15:42.220467       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 13:15:42.375140       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e9be4b439e8723094acf920c06dddcd4a0b5d64b13a048e5f569dc51d0fab096] <==
	I0603 13:09:30.795762       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 13:09:30.845667       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0603 13:09:30.853049       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.141]
	I0603 13:09:30.854192       1 controller.go:615] quota admission added evaluator for: endpoints
	I0603 13:09:30.859442       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 13:09:31.210410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 13:09:31.950757       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 13:09:31.973294       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0603 13:09:31.990437       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 13:09:44.998329       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0603 13:09:45.248648       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0603 13:10:30.913763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46902: use of closed network connection
	E0603 13:10:31.101347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46908: use of closed network connection
	E0603 13:10:31.289955       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46912: use of closed network connection
	E0603 13:10:31.470192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46928: use of closed network connection
	E0603 13:10:31.638249       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46942: use of closed network connection
	E0603 13:10:31.811516       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:46970: use of closed network connection
	E0603 13:10:32.109797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:47008: use of closed network connection
	E0603 13:10:32.285163       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:47016: use of closed network connection
	E0603 13:10:32.455863       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:47036: use of closed network connection
	E0603 13:10:32.622052       1 conn.go:339] Error on socket receive: read tcp 192.168.39.141:8443->192.168.39.1:47064: use of closed network connection
	I0603 13:13:50.169539       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0603 13:13:50.171280       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0603 13:13:50.183333       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0603 13:13:50.186392       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
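The earlier kube-apiserver shut down its endpoint reconciler at 13:13:50 and the replacement instance finished syncing its caches at 13:15:29. Assuming the kubeconfig context matches the profile name, both containers' output is normally also reachable through the API server itself; the --previous form only works while the exited container is still retained on the node:

	kubectl --context multinode-101468 -n kube-system logs kube-apiserver-multinode-101468              # current container
	kubectl --context multinode-101468 -n kube-system logs kube-apiserver-multinode-101468 --previous   # container that exited at 13:13:50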
	
	
	==> kube-controller-manager [21a5bccaa9cf36906929cabd4c0566494209b0dd42c22fbe5ca3dc90836ea727] <==
	I0603 13:09:50.072308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="99.122µs"
	I0603 13:10:18.508308       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101468-m02\" does not exist"
	I0603 13:10:18.522942       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m02" podCIDRs=["10.244.1.0/24"]
	I0603 13:10:19.458602       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101468-m02"
	I0603 13:10:26.343271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:10:28.635160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.393334ms"
	I0603 13:10:28.649994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.774352ms"
	I0603 13:10:28.650124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.514µs"
	I0603 13:10:30.178888       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.939236ms"
	I0603 13:10:30.179047       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.064µs"
	I0603 13:10:30.396424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.615312ms"
	I0603 13:10:30.396666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.202µs"
	I0603 13:10:57.839441       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101468-m03\" does not exist"
	I0603 13:10:57.839579       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:10:57.851258       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m03" podCIDRs=["10.244.2.0/24"]
	I0603 13:10:59.475364       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-101468-m03"
	I0603 13:11:06.998222       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:11:35.981884       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:11:37.038348       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101468-m03\" does not exist"
	I0603 13:11:37.038639       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:11:37.063299       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m03" podCIDRs=["10.244.3.0/24"]
	I0603 13:11:43.756484       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:12:24.529125       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m03"
	I0603 13:12:24.604492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.821312ms"
	I0603 13:12:24.605144       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125.26µs"
	
	
	==> kube-controller-manager [64361fea21d48d186fcc54b1dbcb7e9ebc1a6a1a5ca1d3014d7af495415caa31] <==
	I0603 13:16:07.453053       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m02" podCIDRs=["10.244.1.0/24"]
	I0603 13:16:09.341747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.183µs"
	I0603 13:16:09.382375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.511µs"
	I0603 13:16:09.395676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.495µs"
	I0603 13:16:09.402974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.147µs"
	I0603 13:16:09.410623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.693µs"
	I0603 13:16:09.414842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.478µs"
	I0603 13:16:12.138188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.866µs"
	I0603 13:16:15.446884       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:16:15.462178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.926µs"
	I0603 13:16:15.481835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.592µs"
	I0603 13:16:16.891955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.310445ms"
	I0603 13:16:16.894457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.375µs"
	I0603 13:16:33.656003       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:16:34.635851       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:16:34.635950       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-101468-m03\" does not exist"
	I0603 13:16:34.648208       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-101468-m03" podCIDRs=["10.244.2.0/24"]
	I0603 13:16:41.445487       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:16:46.983825       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-101468-m02"
	I0603 13:17:32.390277       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.03076ms"
	I0603 13:17:32.390913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.624µs"
	I0603 13:17:42.201275       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-jd5x2"
	I0603 13:17:42.228126       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-jd5x2"
	I0603 13:17:42.228171       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vhd2b"
	I0603 13:17:42.253615       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vhd2b"
	
	
	==> kube-proxy [4c205814428f5f446bf31b3c1eb05d88a0df1b650bc2eb6dac437cfc1aac5cb7] <==
	I0603 13:09:46.123830       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:09:46.140223       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	I0603 13:09:46.180500       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:09:46.180536       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:09:46.180551       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:09:46.183544       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:09:46.183839       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:09:46.183890       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:09:46.185218       1 config.go:192] "Starting service config controller"
	I0603 13:09:46.185317       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:09:46.185361       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:09:46.185378       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:09:46.185862       1 config.go:319] "Starting node config controller"
	I0603 13:09:46.185903       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:09:46.286003       1 shared_informer.go:320] Caches are synced for node config
	I0603 13:09:46.286052       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:09:46.286143       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cda936f669af5731768b8c429bbe487aa9cdf8a5510ea785c0150229ba2c5f0d] <==
	I0603 13:15:30.121030       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:15:30.135591       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	I0603 13:15:30.240538       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:15:30.240639       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:15:30.240656       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:15:30.245192       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:15:30.245392       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:15:30.245424       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:15:30.246963       1 config.go:192] "Starting service config controller"
	I0603 13:15:30.247035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:15:30.247106       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:15:30.247128       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:15:30.253424       1 config.go:319] "Starting node config controller"
	I0603 13:15:30.253454       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:15:30.348335       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 13:15:30.348410       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:15:30.353976       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [115ca08701ae57c4ebbda3e53cd4d8ac85cc2e414e2c662a45e0f7bf8e8a8ddb] <==
	I0603 13:15:27.090809       1 serving.go:380] Generated self-signed cert in-memory
	W0603 13:15:29.048930       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 13:15:29.049119       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 13:15:29.049151       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 13:15:29.049255       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 13:15:29.088776       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 13:15:29.088823       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:15:29.090701       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 13:15:29.090879       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 13:15:29.090926       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 13:15:29.090958       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 13:15:29.191172       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [796bbd6b016f5db78dda5e2dd3aa3a11c30982b1456a74336ec89e55dcf5f94f] <==
	E0603 13:09:29.250866       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 13:09:29.250894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 13:09:29.250922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 13:09:29.250948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 13:09:29.250987       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 13:09:30.070403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0603 13:09:30.070458       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0603 13:09:30.188875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 13:09:30.188955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 13:09:30.207966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0603 13:09:30.208159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0603 13:09:30.214937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 13:09:30.215103       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 13:09:30.219013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 13:09:30.219157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 13:09:30.325391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 13:09:30.325529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 13:09:30.351841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0603 13:09:30.351962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0603 13:09:30.372929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 13:09:30.373012       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 13:09:30.711267       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 13:09:30.711871       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 13:09:32.725475       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0603 13:13:50.165820       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.259691    3110 topology_manager.go:215] "Topology Admit Handler" podUID="ceb550ef-f06f-425c-b564-f4ad51d298bc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rszqr"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.259731    3110 topology_manager.go:215] "Topology Admit Handler" podUID="7a0d546e-6072-497f-8464-3a2dd172f9a3" podNamespace="default" podName="busybox-fc5497c4f-7jrcp"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.274322    3110 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.374900    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3e7c090a-031c-483b-b89d-6192f0b73a9d-cni-cfg\") pod \"kindnet-m96bv\" (UID: \"3e7c090a-031c-483b-b89d-6192f0b73a9d\") " pod="kube-system/kindnet-m96bv"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.375523    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e7c090a-031c-483b-b89d-6192f0b73a9d-xtables-lock\") pod \"kindnet-m96bv\" (UID: \"3e7c090a-031c-483b-b89d-6192f0b73a9d\") " pod="kube-system/kindnet-m96bv"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.375724    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e7c090a-031c-483b-b89d-6192f0b73a9d-lib-modules\") pod \"kindnet-m96bv\" (UID: \"3e7c090a-031c-483b-b89d-6192f0b73a9d\") " pod="kube-system/kindnet-m96bv"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.375805    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b1fbac-04e0-46c6-a2cd-8befd0343e0e-lib-modules\") pod \"kube-proxy-nf6c2\" (UID: \"10b1fbac-04e0-46c6-a2cd-8befd0343e0e\") " pod="kube-system/kube-proxy-nf6c2"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.376197    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b1fbac-04e0-46c6-a2cd-8befd0343e0e-xtables-lock\") pod \"kube-proxy-nf6c2\" (UID: \"10b1fbac-04e0-46c6-a2cd-8befd0343e0e\") " pod="kube-system/kube-proxy-nf6c2"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: I0603 13:15:29.376507    3110 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9bf865e3-3171-4447-a928-3f7bcde9b7c4-tmp\") pod \"storage-provisioner\" (UID: \"9bf865e3-3171-4447-a928-3f7bcde9b7c4\") " pod="kube-system/storage-provisioner"
	Jun 03 13:15:29 multinode-101468 kubelet[3110]: E0603 13:15:29.442819    3110 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-101468\" already exists" pod="kube-system/kube-apiserver-multinode-101468"
	Jun 03 13:16:25 multinode-101468 kubelet[3110]: E0603 13:16:25.368543    3110 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:16:25 multinode-101468 kubelet[3110]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:16:25 multinode-101468 kubelet[3110]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:16:25 multinode-101468 kubelet[3110]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:16:25 multinode-101468 kubelet[3110]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:17:25 multinode-101468 kubelet[3110]: E0603 13:17:25.362046    3110 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:17:25 multinode-101468 kubelet[3110]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:17:25 multinode-101468 kubelet[3110]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:17:25 multinode-101468 kubelet[3110]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:17:25 multinode-101468 kubelet[3110]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 13:18:25 multinode-101468 kubelet[3110]: E0603 13:18:25.362214    3110 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 13:18:25 multinode-101468 kubelet[3110]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 13:18:25 multinode-101468 kubelet[3110]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 13:18:25 multinode-101468 kubelet[3110]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 13:18:25 multinode-101468 kubelet[3110]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:19:07.845504 1116350 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19011-1078924/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-101468 -n multinode-101468
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-101468 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.39s)
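
A note on the stderr block above: the "failed to output last start logs ... bufio.Scanner: token too long" line is Go's bufio.Scanner refusing a single line longer than its default 64 KiB token limit. The following sketch is illustrative only, not minikube's actual logs code, and the file name is a stand-in; it shows how giving the scanner a larger buffer reads such a file without that error.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Stand-in for the .minikube/logs/lastStart.txt file named in the error above.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// bufio.Scanner caps tokens at bufio.MaxScanTokenSize (64 KiB) by default;
	// one longer line makes Scan() stop with "bufio.Scanner: token too long".
	// Raising the maximum lets very long log lines through.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}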

                                                
                                    
TestPreload (283.82s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-169498 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0603 13:24:58.228857 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-169498 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m22.897227411s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-169498 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-169498
E0603 13:27:28.591624 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 13:27:45.542541 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-169498: exit status 82 (2m0.473987703s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-169498"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-169498 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-06-03 13:27:52.354805904 +0000 UTC m=+3824.559364421
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-169498 -n test-preload-169498
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-169498 -n test-preload-169498: exit status 3 (18.635452622s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:28:10.985838 1119333 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	E0603 13:28:10.985867 1119333 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-169498" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-169498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-169498
--- FAIL: TestPreload (283.82s)
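
The exit status 82 / GUEST_STOP_TIMEOUT failure above has the shape of a stop request that keeps polling the VM state and gives up while the machine still reports "Running". The following is a hypothetical Go sketch of that poll-until-stopped-or-deadline pattern, not minikube's actual stop path; getState and the short deadline are stand-ins for illustration.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// getState stands in for a driver call that reports the VM state.
// It always returns "Running" here so the deadline path is exercised.
func getState() string { return "Running" }

// waitForStop polls until the VM reports "Stopped" or the context deadline expires.
func waitForStop(ctx context.Context) error {
	ticker := time.NewTicker(200 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("stop: unable to stop vm, current state %q: %w", getState(), ctx.Err())
		case <-ticker.C:
			if getState() == "Stopped" {
				return nil
			}
		}
	}
}

func main() {
	// Deadline kept short for the sketch; the run above gave up after roughly two minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if err := waitForStop(ctx); err != nil && errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("GUEST_STOP_TIMEOUT:", err)
	}
}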

                                                
                                    
TestKubernetesUpgrade (380.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-423965 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-423965 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m57.416376945s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-423965] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-423965" primary control-plane node in "kubernetes-upgrade-423965" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 13:30:02.948887 1120415 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:30:02.949151 1120415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:30:02.949185 1120415 out.go:304] Setting ErrFile to fd 2...
	I0603 13:30:02.949196 1120415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:30:02.949439 1120415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:30:02.950034 1120415 out.go:298] Setting JSON to false
	I0603 13:30:02.951132 1120415 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15150,"bootTime":1717406253,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:30:02.951194 1120415 start.go:139] virtualization: kvm guest
	I0603 13:30:02.954119 1120415 out.go:177] * [kubernetes-upgrade-423965] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:30:02.956526 1120415 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:30:02.955856 1120415 notify.go:220] Checking for updates...
	I0603 13:30:02.957836 1120415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:30:02.959235 1120415 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:30:02.961321 1120415 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:30:02.963734 1120415 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:30:02.966562 1120415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:30:02.968374 1120415 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:30:03.011768 1120415 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 13:30:03.013310 1120415 start.go:297] selected driver: kvm2
	I0603 13:30:03.013334 1120415 start.go:901] validating driver "kvm2" against <nil>
	I0603 13:30:03.013355 1120415 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:30:03.014343 1120415 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:30:03.033606 1120415 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:30:03.051321 1120415 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:30:03.051388 1120415 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 13:30:03.051673 1120415 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 13:30:03.051713 1120415 cni.go:84] Creating CNI manager for ""
	I0603 13:30:03.051723 1120415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:30:03.051734 1120415 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 13:30:03.051811 1120415 start.go:340] cluster config:
	{Name:kubernetes-upgrade-423965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-423965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:30:03.051915 1120415 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:30:03.053681 1120415 out.go:177] * Starting "kubernetes-upgrade-423965" primary control-plane node in "kubernetes-upgrade-423965" cluster
	I0603 13:30:03.055347 1120415 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:30:03.055405 1120415 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 13:30:03.055421 1120415 cache.go:56] Caching tarball of preloaded images
	I0603 13:30:03.055509 1120415 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:30:03.055520 1120415 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 13:30:03.055982 1120415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/config.json ...
	I0603 13:30:03.056017 1120415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/config.json: {Name:mk2f62bfad05462b32a0b4a254120d2c3e8afc81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:30:03.056191 1120415 start.go:360] acquireMachinesLock for kubernetes-upgrade-423965: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:30:28.714750 1120415 start.go:364] duration metric: took 25.658524187s to acquireMachinesLock for "kubernetes-upgrade-423965"
	I0603 13:30:28.714826 1120415 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-423965 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-423965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:30:28.714927 1120415 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 13:30:28.717176 1120415 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 13:30:28.717427 1120415 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:30:28.717487 1120415 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:30:28.735250 1120415 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43859
	I0603 13:30:28.735666 1120415 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:30:28.736200 1120415 main.go:141] libmachine: Using API Version  1
	I0603 13:30:28.736221 1120415 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:30:28.736576 1120415 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:30:28.736774 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetMachineName
	I0603 13:30:28.736911 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:30:28.737068 1120415 start.go:159] libmachine.API.Create for "kubernetes-upgrade-423965" (driver="kvm2")
	I0603 13:30:28.737096 1120415 client.go:168] LocalClient.Create starting
	I0603 13:30:28.737122 1120415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 13:30:28.737152 1120415 main.go:141] libmachine: Decoding PEM data...
	I0603 13:30:28.737172 1120415 main.go:141] libmachine: Parsing certificate...
	I0603 13:30:28.737242 1120415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 13:30:28.737262 1120415 main.go:141] libmachine: Decoding PEM data...
	I0603 13:30:28.737271 1120415 main.go:141] libmachine: Parsing certificate...
	I0603 13:30:28.737291 1120415 main.go:141] libmachine: Running pre-create checks...
	I0603 13:30:28.737300 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .PreCreateCheck
	I0603 13:30:28.737694 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetConfigRaw
	I0603 13:30:28.738138 1120415 main.go:141] libmachine: Creating machine...
	I0603 13:30:28.738152 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .Create
	I0603 13:30:28.738328 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Creating KVM machine...
	I0603 13:30:28.739508 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found existing default KVM network
	I0603 13:30:28.740599 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:28.740415 1120727 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e9:6b:7f} reservation:<nil>}
	I0603 13:30:28.741472 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:28.741370 1120727 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000254350}
	I0603 13:30:28.741497 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | created network xml: 
	I0603 13:30:28.741509 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | <network>
	I0603 13:30:28.741520 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG |   <name>mk-kubernetes-upgrade-423965</name>
	I0603 13:30:28.741531 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG |   <dns enable='no'/>
	I0603 13:30:28.741542 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG |   
	I0603 13:30:28.741551 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0603 13:30:28.741559 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG |     <dhcp>
	I0603 13:30:28.741572 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0603 13:30:28.741589 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG |     </dhcp>
	I0603 13:30:28.741614 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG |   </ip>
	I0603 13:30:28.741628 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG |   
	I0603 13:30:28.741641 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | </network>
	I0603 13:30:28.741651 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | 
	I0603 13:30:28.747122 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | trying to create private KVM network mk-kubernetes-upgrade-423965 192.168.50.0/24...
	I0603 13:30:28.821535 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | private KVM network mk-kubernetes-upgrade-423965 192.168.50.0/24 created
	I0603 13:30:28.821572 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:28.821507 1120727 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:30:28.821601 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965 ...
	I0603 13:30:28.821618 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 13:30:28.821759 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 13:30:29.069524 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:29.069393 1120727 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa...
	I0603 13:30:29.252427 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:29.252233 1120727 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/kubernetes-upgrade-423965.rawdisk...
	I0603 13:30:29.252487 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Writing magic tar header
	I0603 13:30:29.252510 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965 (perms=drwx------)
	I0603 13:30:29.252530 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 13:30:29.252541 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 13:30:29.252552 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 13:30:29.252562 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 13:30:29.252574 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 13:30:29.252587 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Writing SSH key tar header
	I0603 13:30:29.252595 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Creating domain...
	I0603 13:30:29.252653 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:29.252362 1120727 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965 ...
	I0603 13:30:29.252689 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965
	I0603 13:30:29.252703 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 13:30:29.252725 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:30:29.252742 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 13:30:29.252755 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 13:30:29.252771 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Checking permissions on dir: /home/jenkins
	I0603 13:30:29.252784 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Checking permissions on dir: /home
	I0603 13:30:29.252807 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Skipping /home - not owner
	I0603 13:30:29.253872 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) define libvirt domain using xml: 
	I0603 13:30:29.253894 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) <domain type='kvm'>
	I0603 13:30:29.253904 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   <name>kubernetes-upgrade-423965</name>
	I0603 13:30:29.253917 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   <memory unit='MiB'>2200</memory>
	I0603 13:30:29.253928 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   <vcpu>2</vcpu>
	I0603 13:30:29.253935 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   <features>
	I0603 13:30:29.253947 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <acpi/>
	I0603 13:30:29.253962 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <apic/>
	I0603 13:30:29.253972 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <pae/>
	I0603 13:30:29.253977 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     
	I0603 13:30:29.253991 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   </features>
	I0603 13:30:29.254004 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   <cpu mode='host-passthrough'>
	I0603 13:30:29.254015 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   
	I0603 13:30:29.254023 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   </cpu>
	I0603 13:30:29.254048 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   <os>
	I0603 13:30:29.254067 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <type>hvm</type>
	I0603 13:30:29.254074 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <boot dev='cdrom'/>
	I0603 13:30:29.254081 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <boot dev='hd'/>
	I0603 13:30:29.254087 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <bootmenu enable='no'/>
	I0603 13:30:29.254094 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   </os>
	I0603 13:30:29.254102 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   <devices>
	I0603 13:30:29.254113 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <disk type='file' device='cdrom'>
	I0603 13:30:29.254139 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/boot2docker.iso'/>
	I0603 13:30:29.254159 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <target dev='hdc' bus='scsi'/>
	I0603 13:30:29.254168 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <readonly/>
	I0603 13:30:29.254176 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     </disk>
	I0603 13:30:29.254182 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <disk type='file' device='disk'>
	I0603 13:30:29.254191 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 13:30:29.254200 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/kubernetes-upgrade-423965.rawdisk'/>
	I0603 13:30:29.254211 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <target dev='hda' bus='virtio'/>
	I0603 13:30:29.254220 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     </disk>
	I0603 13:30:29.254240 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <interface type='network'>
	I0603 13:30:29.254255 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <source network='mk-kubernetes-upgrade-423965'/>
	I0603 13:30:29.254266 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <model type='virtio'/>
	I0603 13:30:29.254274 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     </interface>
	I0603 13:30:29.254280 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <interface type='network'>
	I0603 13:30:29.254286 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <source network='default'/>
	I0603 13:30:29.254292 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <model type='virtio'/>
	I0603 13:30:29.254301 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     </interface>
	I0603 13:30:29.254316 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <serial type='pty'>
	I0603 13:30:29.254328 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <target port='0'/>
	I0603 13:30:29.254338 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     </serial>
	I0603 13:30:29.254347 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <console type='pty'>
	I0603 13:30:29.254358 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <target type='serial' port='0'/>
	I0603 13:30:29.254369 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     </console>
	I0603 13:30:29.254374 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     <rng model='virtio'>
	I0603 13:30:29.254410 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)       <backend model='random'>/dev/random</backend>
	I0603 13:30:29.254430 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     </rng>
	I0603 13:30:29.254457 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     
	I0603 13:30:29.254471 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)     
	I0603 13:30:29.254485 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965)   </devices>
	I0603 13:30:29.254492 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) </domain>
	I0603 13:30:29.254507 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) 
	I0603 13:30:29.263180 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:c0:d9:ad in network default
	I0603 13:30:29.263746 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Ensuring networks are active...
	I0603 13:30:29.263771 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:29.264565 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Ensuring network default is active
	I0603 13:30:29.264900 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Ensuring network mk-kubernetes-upgrade-423965 is active
	I0603 13:30:29.265546 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Getting domain xml...
	I0603 13:30:29.266427 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Creating domain...
	I0603 13:30:30.563383 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Waiting to get IP...
	I0603 13:30:30.564502 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:30.564986 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:30.565017 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:30.564966 1120727 retry.go:31] will retry after 285.744304ms: waiting for machine to come up
	I0603 13:30:30.852770 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:30.853236 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:30.853264 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:30.853187 1120727 retry.go:31] will retry after 265.50632ms: waiting for machine to come up
	I0603 13:30:31.120713 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:31.121511 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:31.121545 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:31.121454 1120727 retry.go:31] will retry after 344.318473ms: waiting for machine to come up
	I0603 13:30:31.468316 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:31.469070 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:31.469102 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:31.469026 1120727 retry.go:31] will retry after 598.52ms: waiting for machine to come up
	I0603 13:30:32.068886 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:32.069478 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:32.069513 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:32.069395 1120727 retry.go:31] will retry after 621.492164ms: waiting for machine to come up
	I0603 13:30:32.692475 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:32.693004 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:32.693063 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:32.692962 1120727 retry.go:31] will retry after 883.72699ms: waiting for machine to come up
	I0603 13:30:33.578226 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:33.578805 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:33.578837 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:33.578715 1120727 retry.go:31] will retry after 761.139944ms: waiting for machine to come up
	I0603 13:30:34.341032 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:34.341445 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:34.341473 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:34.341383 1120727 retry.go:31] will retry after 1.472611699s: waiting for machine to come up
	I0603 13:30:35.816271 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:35.816778 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:35.816810 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:35.816710 1120727 retry.go:31] will retry after 1.152584185s: waiting for machine to come up
	I0603 13:30:36.970623 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:36.971083 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:36.971114 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:36.971025 1120727 retry.go:31] will retry after 1.590167985s: waiting for machine to come up
	I0603 13:30:38.564115 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:38.564569 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:38.564594 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:38.564490 1120727 retry.go:31] will retry after 2.285967279s: waiting for machine to come up
	I0603 13:30:40.853631 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:40.854080 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:40.854113 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:40.854023 1120727 retry.go:31] will retry after 3.095649821s: waiting for machine to come up
	I0603 13:30:43.951633 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:43.952181 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:43.952215 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:43.952121 1120727 retry.go:31] will retry after 3.478298132s: waiting for machine to come up
	I0603 13:30:47.434569 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:47.435075 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find current IP address of domain kubernetes-upgrade-423965 in network mk-kubernetes-upgrade-423965
	I0603 13:30:47.435104 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | I0603 13:30:47.435016 1120727 retry.go:31] will retry after 4.622453564s: waiting for machine to come up
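The block above is libmachine polling libvirt's DHCP leases until the new domain reports an address, backing off a little longer on each attempt. A minimal Go sketch of that retry-with-backoff pattern (the check function, attempt budget and growth factor are illustrative assumptions, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff polls check() until it reports success, the retry budget
// is exhausted, or check() returns a hard error. The delay grows roughly
// geometrically with random jitter, similar to the intervals in the log above.
func retryWithBackoff(attempts int, base time.Duration, check func() (bool, error)) error {
	delay := base
	for i := 0; i < attempts; i++ {
		ok, err := check()
		if err != nil {
			return err // hard failure, no point retrying
		}
		if ok {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the interval
	}
	return errors.New("machine did not come up in time")
}

func main() {
	start := time.Now()
	err := retryWithBackoff(10, 500*time.Millisecond, func() (bool, error) {
		// Stand-in for "look up the domain's IP in the libvirt DHCP leases".
		return time.Since(start) > 3*time.Second, nil
	})
	fmt.Println("result:", err)
}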
	I0603 13:30:52.061320 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.061707 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Found IP for machine: 192.168.50.64
	I0603 13:30:52.061735 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Reserving static IP address...
	I0603 13:30:52.061751 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has current primary IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.062162 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-423965", mac: "52:54:00:cf:e2:9b", ip: "192.168.50.64"} in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.138416 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Getting to WaitForSSH function...
	I0603 13:30:52.138453 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Reserved static IP address: 192.168.50.64
	I0603 13:30:52.138467 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Waiting for SSH to be available...
	I0603 13:30:52.141073 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.141683 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:52.141709 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.141864 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Using SSH client type: external
	I0603 13:30:52.141888 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa (-rw-------)
	I0603 13:30:52.141916 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:30:52.141943 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | About to run SSH command:
	I0603 13:30:52.141959 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | exit 0
	I0603 13:30:52.274496 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | SSH cmd err, output: <nil>: 
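Here the machine counts as reachable once a trivial "exit 0" succeeds over the external ssh client with the options listed above. A rough Go sketch of that liveness probe via the system ssh binary (host, user and key path are placeholders, not minikube's helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs a trivial "exit 0" through the system ssh binary, mirroring
// the external-client options shown in the log above.
func sshReachable(user, host, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	// A nil error means the remote shell ran "exit 0" successfully.
	return exec.Command("ssh", args...).Run()
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := sshReachable("docker", "192.168.50.64", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}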
	I0603 13:30:52.274768 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) KVM machine creation complete!
	I0603 13:30:52.275140 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetConfigRaw
	I0603 13:30:52.275746 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:30:52.275979 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:30:52.276211 1120415 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 13:30:52.276229 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetState
	I0603 13:30:52.277732 1120415 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 13:30:52.277746 1120415 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 13:30:52.277751 1120415 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 13:30:52.277757 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:52.280437 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.280775 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:52.280798 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.280953 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:52.281186 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:52.281389 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:52.281571 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:52.281753 1120415 main.go:141] libmachine: Using SSH client type: native
	I0603 13:30:52.282032 1120415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:30:52.282061 1120415 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 13:30:52.396987 1120415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:30:52.397013 1120415 main.go:141] libmachine: Detecting the provisioner...
	I0603 13:30:52.397021 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:52.399842 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.400329 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:52.400361 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.400635 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:52.400924 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:52.401103 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:52.401237 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:52.401434 1120415 main.go:141] libmachine: Using SSH client type: native
	I0603 13:30:52.401613 1120415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:30:52.401625 1120415 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 13:30:52.514497 1120415 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 13:30:52.514581 1120415 main.go:141] libmachine: found compatible host: buildroot
	I0603 13:30:52.514589 1120415 main.go:141] libmachine: Provisioning with buildroot...
	I0603 13:30:52.514597 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetMachineName
	I0603 13:30:52.514864 1120415 buildroot.go:166] provisioning hostname "kubernetes-upgrade-423965"
	I0603 13:30:52.514898 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetMachineName
	I0603 13:30:52.515108 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:52.518074 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.518459 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:52.518508 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.518673 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:52.518880 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:52.519171 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:52.519310 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:52.519515 1120415 main.go:141] libmachine: Using SSH client type: native
	I0603 13:30:52.519693 1120415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:30:52.519706 1120415 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-423965 && echo "kubernetes-upgrade-423965" | sudo tee /etc/hostname
	I0603 13:30:52.652514 1120415 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-423965
	
	I0603 13:30:52.652557 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:52.655648 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.656052 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:52.656092 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.656216 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:52.656446 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:52.656625 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:52.656816 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:52.657022 1120415 main.go:141] libmachine: Using SSH client type: native
	I0603 13:30:52.657215 1120415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:30:52.657240 1120415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-423965' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-423965/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-423965' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:30:52.779731 1120415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:30:52.779769 1120415 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:30:52.779816 1120415 buildroot.go:174] setting up certificates
	I0603 13:30:52.779828 1120415 provision.go:84] configureAuth start
	I0603 13:30:52.779840 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetMachineName
	I0603 13:30:52.780155 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetIP
	I0603 13:30:52.783157 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.783580 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:52.783605 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.783835 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:52.786389 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.786707 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:52.786737 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.786893 1120415 provision.go:143] copyHostCerts
	I0603 13:30:52.786971 1120415 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:30:52.786991 1120415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:30:52.787062 1120415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:30:52.787170 1120415 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:30:52.787181 1120415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:30:52.787212 1120415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:30:52.787292 1120415 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:30:52.787309 1120415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:30:52.787337 1120415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
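copyHostCerts above refreshes ca.pem, cert.pem and key.pem by removing any stale copy and writing the file again. A minimal Go sketch of that remove-then-copy step, with placeholder paths:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// copyCert replaces the destination with a fresh copy of the source, the way
// the copyHostCerts step above removes and re-copies the host certificates.
func copyCert(src, dst string) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
		return err
	}
	if err := os.WriteFile(dst, data, 0o600); err != nil {
		return err
	}
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, len(data))
	return nil
}

func main() {
	home, _ := os.UserHomeDir()
	src := filepath.Join(home, ".minikube", "certs", "ca.pem")
	dst := filepath.Join(home, ".minikube", "ca.pem")
	if err := copyCert(src, dst); err != nil {
		fmt.Println("copy failed:", err)
	}
}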
	I0603 13:30:52.787407 1120415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-423965 san=[127.0.0.1 192.168.50.64 kubernetes-upgrade-423965 localhost minikube]
	I0603 13:30:52.938578 1120415 provision.go:177] copyRemoteCerts
	I0603 13:30:52.938669 1120415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:30:52.938701 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:52.941677 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.942023 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:52.942049 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:52.942308 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:52.942590 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:52.942788 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:52.942996 1120415 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa Username:docker}
	I0603 13:30:53.032354 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:30:53.058797 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0603 13:30:53.085857 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:30:53.114146 1120415 provision.go:87] duration metric: took 334.302579ms to configureAuth
	I0603 13:30:53.114175 1120415 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:30:53.114383 1120415 config.go:182] Loaded profile config "kubernetes-upgrade-423965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:30:53.114475 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:53.117478 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.117867 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:53.117897 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.118107 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:53.118310 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:53.118516 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:53.118713 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:53.118947 1120415 main.go:141] libmachine: Using SSH client type: native
	I0603 13:30:53.119138 1120415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:30:53.119160 1120415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:30:53.433670 1120415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:30:53.433714 1120415 main.go:141] libmachine: Checking connection to Docker...
	I0603 13:30:53.433763 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetURL
	I0603 13:30:53.435198 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | Using libvirt version 6000000
	I0603 13:30:53.437316 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.437681 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:53.437712 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.437894 1120415 main.go:141] libmachine: Docker is up and running!
	I0603 13:30:53.437908 1120415 main.go:141] libmachine: Reticulating splines...
	I0603 13:30:53.437915 1120415 client.go:171] duration metric: took 24.700811574s to LocalClient.Create
	I0603 13:30:53.437948 1120415 start.go:167] duration metric: took 24.700871744s to libmachine.API.Create "kubernetes-upgrade-423965"
	I0603 13:30:53.437957 1120415 start.go:293] postStartSetup for "kubernetes-upgrade-423965" (driver="kvm2")
	I0603 13:30:53.437967 1120415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:30:53.437984 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:30:53.438283 1120415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:30:53.438310 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:53.440547 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.440878 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:53.440900 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.441039 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:53.441224 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:53.441389 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:53.441546 1120415 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa Username:docker}
	I0603 13:30:53.528021 1120415 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:30:53.532442 1120415 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:30:53.532463 1120415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:30:53.532521 1120415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:30:53.532589 1120415 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:30:53.532688 1120415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:30:53.542556 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:30:53.567977 1120415 start.go:296] duration metric: took 130.004203ms for postStartSetup
	I0603 13:30:53.568035 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetConfigRaw
	I0603 13:30:53.568622 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetIP
	I0603 13:30:53.571475 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.571886 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:53.571920 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.572163 1120415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/config.json ...
	I0603 13:30:53.572378 1120415 start.go:128] duration metric: took 24.857417697s to createHost
	I0603 13:30:53.572407 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:53.575106 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.575412 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:53.575451 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.575548 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:53.575756 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:53.575934 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:53.576084 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:53.576276 1120415 main.go:141] libmachine: Using SSH client type: native
	I0603 13:30:53.576452 1120415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:30:53.576462 1120415 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:30:53.694462 1120415 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421453.667916953
	
	I0603 13:30:53.694496 1120415 fix.go:216] guest clock: 1717421453.667916953
	I0603 13:30:53.694505 1120415 fix.go:229] Guest: 2024-06-03 13:30:53.667916953 +0000 UTC Remote: 2024-06-03 13:30:53.572394077 +0000 UTC m=+50.674053424 (delta=95.522876ms)
	I0603 13:30:53.694547 1120415 fix.go:200] guest clock delta is within tolerance: 95.522876ms
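The guest clock check above runs date +%s.%N on the VM, parses the result and compares it with the host-side timestamp, accepting the drift if it stays within tolerance. A small Go sketch of that comparison (the 2s tolerance is an assumed value for illustration):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano turns the output of `date +%s.%N` (e.g. "1717421453.667916953")
// into a time.Time.
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad the fractional part to 9 digits before parsing.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixNano("1717421453.667916953") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	tolerance := 2 * time.Second // assumed threshold for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}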
	I0603 13:30:53.694554 1120415 start.go:83] releasing machines lock for "kubernetes-upgrade-423965", held for 24.979765492s
	I0603 13:30:53.694578 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:30:53.694917 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetIP
	I0603 13:30:53.698118 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.698516 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:53.698550 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.698760 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:30:53.699321 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:30:53.699510 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:30:53.699671 1120415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:30:53.699733 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:53.699832 1120415 ssh_runner.go:195] Run: cat /version.json
	I0603 13:30:53.699884 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:30:53.702697 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.703087 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:53.703115 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.703134 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.703292 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:53.703480 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:53.703632 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:53.703656 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:53.703695 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:53.703871 1120415 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa Username:docker}
	I0603 13:30:53.703888 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:30:53.704063 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:30:53.704220 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:30:53.704447 1120415 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa Username:docker}
	I0603 13:30:53.786543 1120415 ssh_runner.go:195] Run: systemctl --version
	I0603 13:30:53.817632 1120415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:30:53.979457 1120415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:30:53.986083 1120415 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:30:53.986160 1120415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:30:54.004995 1120415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:30:54.005025 1120415 start.go:494] detecting cgroup driver to use...
	I0603 13:30:54.005093 1120415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:30:54.023006 1120415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:30:54.037903 1120415 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:30:54.037986 1120415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:30:54.053629 1120415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:30:54.068272 1120415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:30:54.196308 1120415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:30:54.355651 1120415 docker.go:233] disabling docker service ...
	I0603 13:30:54.355747 1120415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:30:54.371141 1120415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:30:54.385618 1120415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:30:54.530321 1120415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:30:54.649290 1120415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:30:54.664803 1120415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:30:54.684557 1120415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 13:30:54.684640 1120415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:30:54.695651 1120415 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:30:54.695717 1120415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:30:54.706419 1120415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:30:54.718768 1120415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
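The sed commands above pin the pause image and switch CRI-O to the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf in place. A pure-Go sketch of the same in-place rewrite, run here against a throwaway local copy rather than the real config on the guest:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same edits the sed commands above perform:
// pin the pause image and switch the cgroup manager.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Work on a throwaway copy so the sketch is runnable locally.
	tmp := "02-crio.conf"
	seed := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	if err := os.WriteFile(tmp, []byte(seed), 0o644); err != nil {
		panic(err)
	}
	if err := rewriteCrioConf(tmp, "registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
		panic(err)
	}
	updated, _ := os.ReadFile(tmp)
	fmt.Print(string(updated))
}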
	I0603 13:30:54.731889 1120415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:30:54.742777 1120415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:30:54.751979 1120415 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:30:54.752064 1120415 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:30:54.765652 1120415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
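When the bridge-nf sysctl is missing, the fallback above is to load br_netfilter and then enable IPv4 forwarding through /proc. A hedged Go sketch of that sequence (it needs root and stands in for, rather than reproduces, minikube's helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge-nf sysctl
// file is absent, load br_netfilter, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		// Equivalent of "sudo modprobe br_netfilter" in the log.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed (expected without root):", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}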
	I0603 13:30:54.775740 1120415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:30:54.892536 1120415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:30:55.050098 1120415 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:30:55.050186 1120415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:30:55.056040 1120415 start.go:562] Will wait 60s for crictl version
	I0603 13:30:55.056140 1120415 ssh_runner.go:195] Run: which crictl
	I0603 13:30:55.060736 1120415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:30:55.104248 1120415 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:30:55.104337 1120415 ssh_runner.go:195] Run: crio --version
	I0603 13:30:55.139223 1120415 ssh_runner.go:195] Run: crio --version
	I0603 13:30:55.170993 1120415 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 13:30:55.172286 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetIP
	I0603 13:30:55.175160 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:55.177100 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:30:43 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:30:55.177143 1120415 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:30:55.177163 1120415 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:30:55.183009 1120415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:30:55.196908 1120415 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-423965 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-423965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:30:55.197048 1120415 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:30:55.197116 1120415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:30:55.237352 1120415 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:30:55.237462 1120415 ssh_runner.go:195] Run: which lz4
	I0603 13:30:55.242983 1120415 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:30:55.248901 1120415 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:30:55.248947 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 13:30:57.122272 1120415 crio.go:462] duration metric: took 1.879339125s to copy over tarball
	I0603 13:30:57.122354 1120415 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:31:00.176432 1120415 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.054035706s)
	I0603 13:31:00.176482 1120415 crio.go:469] duration metric: took 3.054168911s to extract the tarball
	I0603 13:31:00.176494 1120415 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:31:00.222383 1120415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:31:00.401138 1120415 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:31:00.401171 1120415 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:31:00.401252 1120415 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:31:00.401275 1120415 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:31:00.401296 1120415 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:31:00.401318 1120415 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 13:31:00.401384 1120415 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:31:00.401395 1120415 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:31:00.401626 1120415 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 13:31:00.401850 1120415 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:31:00.402996 1120415 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 13:31:00.403009 1120415 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:31:00.403024 1120415 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 13:31:00.403006 1120415 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:31:00.403343 1120415 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:31:00.403350 1120415 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:31:00.403419 1120415 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:31:00.403423 1120415 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:31:00.601220 1120415 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:31:00.610775 1120415 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:31:00.610867 1120415 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:31:00.617974 1120415 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 13:31:00.620265 1120415 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:31:00.638389 1120415 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 13:31:00.710446 1120415 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 13:31:00.710499 1120415 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:31:00.710552 1120415 ssh_runner.go:195] Run: which crictl
	I0603 13:31:00.739571 1120415 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:31:00.800106 1120415 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 13:31:00.811471 1120415 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 13:31:00.811522 1120415 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:31:00.811573 1120415 ssh_runner.go:195] Run: which crictl
	I0603 13:31:00.811716 1120415 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 13:31:00.811764 1120415 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 13:31:00.811808 1120415 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 13:31:00.811851 1120415 ssh_runner.go:195] Run: which crictl
	I0603 13:31:00.811775 1120415 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:31:00.811927 1120415 ssh_runner.go:195] Run: which crictl
	I0603 13:31:00.878719 1120415 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 13:31:00.878779 1120415 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:31:00.878826 1120415 ssh_runner.go:195] Run: which crictl
	I0603 13:31:00.878876 1120415 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:31:00.898515 1120415 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 13:31:00.898587 1120415 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 13:31:00.898627 1120415 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 13:31:00.898638 1120415 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:31:00.898670 1120415 ssh_runner.go:195] Run: which crictl
	I0603 13:31:00.898682 1120415 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 13:31:00.898591 1120415 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:31:00.898744 1120415 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:31:00.898767 1120415 ssh_runner.go:195] Run: which crictl
	I0603 13:31:00.898833 1120415 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 13:31:00.973019 1120415 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 13:31:01.024624 1120415 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 13:31:01.024754 1120415 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 13:31:01.024852 1120415 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 13:31:01.024917 1120415 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 13:31:01.025130 1120415 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 13:31:01.025183 1120415 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:31:01.071373 1120415 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 13:31:01.078532 1120415 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 13:31:01.078616 1120415 cache_images.go:92] duration metric: took 677.428025ms to LoadCachedImages
	W0603 13:31:01.078716 1120415 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0603 13:31:01.078734 1120415 kubeadm.go:928] updating node { 192.168.50.64 8443 v1.20.0 crio true true} ...
	I0603 13:31:01.078861 1120415 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-423965 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-423965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:31:01.078925 1120415 ssh_runner.go:195] Run: crio config
	I0603 13:31:01.140552 1120415 cni.go:84] Creating CNI manager for ""
	I0603 13:31:01.140582 1120415 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:31:01.140595 1120415 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:31:01.140623 1120415 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.64 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-423965 NodeName:kubernetes-upgrade-423965 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 13:31:01.140806 1120415 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-423965"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:31:01.140920 1120415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 13:31:01.153424 1120415 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:31:01.153516 1120415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:31:01.165522 1120415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0603 13:31:01.186707 1120415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:31:01.208722 1120415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0603 13:31:01.232505 1120415 ssh_runner.go:195] Run: grep 192.168.50.64	control-plane.minikube.internal$ /etc/hosts
	I0603 13:31:01.236844 1120415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:31:01.253144 1120415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:31:01.385574 1120415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:31:01.408001 1120415 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965 for IP: 192.168.50.64
	I0603 13:31:01.408032 1120415 certs.go:194] generating shared ca certs ...
	I0603 13:31:01.408056 1120415 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:31:01.408239 1120415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:31:01.408293 1120415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:31:01.408314 1120415 certs.go:256] generating profile certs ...
	I0603 13:31:01.408396 1120415 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/client.key
	I0603 13:31:01.408419 1120415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/client.crt with IP's: []
	I0603 13:31:01.693983 1120415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/client.crt ...
	I0603 13:31:01.694017 1120415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/client.crt: {Name:mk7b26245ce109839f0d89105dc7bf5b73167592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:31:01.694194 1120415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/client.key ...
	I0603 13:31:01.694211 1120415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/client.key: {Name:mkf6e9b7699cccbf7963eee2f7b83a51a37a4f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:31:01.694307 1120415 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.key.5b2d753b
	I0603 13:31:01.694332 1120415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.crt.5b2d753b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.64]
	I0603 13:31:01.870570 1120415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.crt.5b2d753b ...
	I0603 13:31:01.870606 1120415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.crt.5b2d753b: {Name:mk73948e4af1aa39102b556992454edb83800280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:31:01.870788 1120415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.key.5b2d753b ...
	I0603 13:31:01.870809 1120415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.key.5b2d753b: {Name:mk625ac12f658eefbca35b5644eb5deb4f4b37b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:31:01.870882 1120415 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.crt.5b2d753b -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.crt
	I0603 13:31:01.870980 1120415 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.key.5b2d753b -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.key
	I0603 13:31:01.871039 1120415 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.key
	I0603 13:31:01.871060 1120415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.crt with IP's: []
	I0603 13:31:02.022470 1120415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.crt ...
	I0603 13:31:02.022506 1120415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.crt: {Name:mk1e1ab51d955ff6ad622c71e994a538243a8b20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:31:02.022690 1120415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.key ...
	I0603 13:31:02.022709 1120415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.key: {Name:mkd83b27822d7b1afb99f06cea06f227e9a72555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:31:02.022891 1120415 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:31:02.022937 1120415 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:31:02.022945 1120415 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:31:02.022965 1120415 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:31:02.022987 1120415 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:31:02.023006 1120415 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:31:02.023047 1120415 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:31:02.024484 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:31:02.053311 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:31:02.083240 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:31:02.109924 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:31:02.136075 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 13:31:02.163281 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:31:02.191416 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:31:02.219287 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:31:02.254489 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:31:02.293175 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:31:02.328899 1120415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:31:02.367844 1120415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:31:02.389840 1120415 ssh_runner.go:195] Run: openssl version
	I0603 13:31:02.395935 1120415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:31:02.408871 1120415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:31:02.414123 1120415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:31:02.414202 1120415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:31:02.420355 1120415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:31:02.432352 1120415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:31:02.443917 1120415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:31:02.448361 1120415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:31:02.448435 1120415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:31:02.454134 1120415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:31:02.465569 1120415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:31:02.478235 1120415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:31:02.484286 1120415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:31:02.484374 1120415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:31:02.490216 1120415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:31:02.501942 1120415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:31:02.506209 1120415 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 13:31:02.506280 1120415 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-423965 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-423965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:31:02.506372 1120415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:31:02.506434 1120415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:31:02.550474 1120415 cri.go:89] found id: ""
	I0603 13:31:02.550551 1120415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 13:31:02.561431 1120415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:31:02.571956 1120415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:31:02.586330 1120415 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:31:02.586358 1120415 kubeadm.go:156] found existing configuration files:
	
	I0603 13:31:02.586420 1120415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:31:02.596344 1120415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:31:02.596414 1120415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:31:02.606826 1120415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:31:02.616977 1120415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:31:02.617050 1120415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:31:02.629877 1120415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:31:02.640442 1120415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:31:02.640509 1120415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:31:02.650268 1120415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:31:02.660171 1120415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:31:02.660245 1120415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:31:02.670643 1120415 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:31:02.791082 1120415 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:31:02.791151 1120415 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:31:02.939711 1120415 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:31:02.939883 1120415 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:31:02.940034 1120415 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:31:03.142574 1120415 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:31:03.183936 1120415 out.go:204]   - Generating certificates and keys ...
	I0603 13:31:03.184174 1120415 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:31:03.184293 1120415 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:31:03.283524 1120415 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 13:31:03.404380 1120415 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 13:31:03.719877 1120415 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 13:31:03.917364 1120415 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 13:31:04.028442 1120415 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 13:31:04.028768 1120415 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-423965 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	I0603 13:31:04.143788 1120415 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 13:31:04.144038 1120415 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-423965 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	I0603 13:31:04.248936 1120415 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 13:31:04.511686 1120415 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 13:31:04.591501 1120415 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 13:31:04.591784 1120415 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:31:04.945971 1120415 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:31:05.334597 1120415 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:31:05.660570 1120415 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:31:05.874066 1120415 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:31:05.896154 1120415 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:31:05.896837 1120415 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:31:05.896903 1120415 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:31:06.028536 1120415 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:31:06.031707 1120415 out.go:204]   - Booting up control plane ...
	I0603 13:31:06.031846 1120415 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:31:06.039811 1120415 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:31:06.039920 1120415 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:31:06.040083 1120415 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:31:06.044248 1120415 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:31:46.037915 1120415 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:31:46.038127 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:31:46.038297 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:31:51.038469 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:31:51.038734 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:32:01.037862 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:32:01.038107 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:32:21.042436 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:32:21.042611 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:33:01.044075 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:33:01.044319 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:33:01.044350 1120415 kubeadm.go:309] 
	I0603 13:33:01.044417 1120415 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:33:01.044479 1120415 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:33:01.044495 1120415 kubeadm.go:309] 
	I0603 13:33:01.044561 1120415 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:33:01.044603 1120415 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:33:01.044756 1120415 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:33:01.044767 1120415 kubeadm.go:309] 
	I0603 13:33:01.044905 1120415 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:33:01.044967 1120415 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:33:01.045024 1120415 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:33:01.045041 1120415 kubeadm.go:309] 
	I0603 13:33:01.045139 1120415 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:33:01.045227 1120415 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:33:01.045237 1120415 kubeadm.go:309] 
	I0603 13:33:01.045374 1120415 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:33:01.045533 1120415 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:33:01.045653 1120415 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:33:01.045772 1120415 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:33:01.045787 1120415 kubeadm.go:309] 
	I0603 13:33:01.046062 1120415 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:33:01.046181 1120415 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:33:01.046253 1120415 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 13:33:01.046396 1120415 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-423965 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-423965 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0603 13:33:01.046442 1120415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:33:03.025561 1120415 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.979085498s)
	I0603 13:33:03.025646 1120415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:33:03.040631 1120415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:33:03.052770 1120415 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:33:03.052795 1120415 kubeadm.go:156] found existing configuration files:
	
	I0603 13:33:03.052845 1120415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:33:03.064665 1120415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:33:03.064727 1120415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:33:03.076498 1120415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:33:03.088254 1120415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:33:03.088337 1120415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:33:03.100439 1120415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:33:03.111920 1120415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:33:03.111974 1120415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:33:03.123819 1120415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:33:03.135569 1120415 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:33:03.135640 1120415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:33:03.147365 1120415 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:33:03.224231 1120415 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:33:03.224438 1120415 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:33:03.409741 1120415 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:33:03.409868 1120415 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:33:03.409991 1120415 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:33:03.626882 1120415 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:33:03.628766 1120415 out.go:204]   - Generating certificates and keys ...
	I0603 13:33:03.628879 1120415 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:33:03.628965 1120415 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:33:03.629060 1120415 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:33:03.629144 1120415 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:33:03.629258 1120415 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:33:03.629361 1120415 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:33:03.629624 1120415 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:33:03.630274 1120415 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:33:03.630747 1120415 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:33:03.631170 1120415 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:33:03.631265 1120415 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:33:03.631316 1120415 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:33:03.978798 1120415 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:33:04.050484 1120415 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:33:04.152679 1120415 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:33:04.273068 1120415 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:33:04.292797 1120415 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:33:04.296813 1120415 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:33:04.296880 1120415 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:33:04.452773 1120415 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:33:04.454770 1120415 out.go:204]   - Booting up control plane ...
	I0603 13:33:04.454880 1120415 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:33:04.455268 1120415 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:33:04.456165 1120415 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:33:04.456778 1120415 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:33:04.458886 1120415 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:33:44.461741 1120415 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:33:44.462110 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:33:44.462295 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:33:49.462953 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:33:49.463178 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:33:59.463577 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:33:59.463792 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:34:19.462889 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:34:19.463140 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:34:59.463066 1120415 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:34:59.463315 1120415 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:34:59.463333 1120415 kubeadm.go:309] 
	I0603 13:34:59.463419 1120415 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:34:59.463508 1120415 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:34:59.463519 1120415 kubeadm.go:309] 
	I0603 13:34:59.463577 1120415 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:34:59.463614 1120415 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:34:59.463703 1120415 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:34:59.463713 1120415 kubeadm.go:309] 
	I0603 13:34:59.463860 1120415 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:34:59.463900 1120415 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:34:59.463948 1120415 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:34:59.463957 1120415 kubeadm.go:309] 
	I0603 13:34:59.464103 1120415 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:34:59.464211 1120415 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:34:59.464228 1120415 kubeadm.go:309] 
	I0603 13:34:59.464408 1120415 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:34:59.464543 1120415 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:34:59.464653 1120415 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:34:59.464767 1120415 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:34:59.464781 1120415 kubeadm.go:309] 
	I0603 13:34:59.465323 1120415 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:34:59.465459 1120415 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:34:59.465550 1120415 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 13:34:59.465639 1120415 kubeadm.go:393] duration metric: took 3m56.959366831s to StartCluster
	I0603 13:34:59.465715 1120415 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:34:59.465790 1120415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:34:59.518585 1120415 cri.go:89] found id: ""
	I0603 13:34:59.518614 1120415 logs.go:276] 0 containers: []
	W0603 13:34:59.518625 1120415 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:34:59.518634 1120415 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:34:59.518698 1120415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:34:59.564700 1120415 cri.go:89] found id: ""
	I0603 13:34:59.564734 1120415 logs.go:276] 0 containers: []
	W0603 13:34:59.564745 1120415 logs.go:278] No container was found matching "etcd"
	I0603 13:34:59.564754 1120415 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:34:59.564821 1120415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:34:59.607460 1120415 cri.go:89] found id: ""
	I0603 13:34:59.607491 1120415 logs.go:276] 0 containers: []
	W0603 13:34:59.607503 1120415 logs.go:278] No container was found matching "coredns"
	I0603 13:34:59.607510 1120415 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:34:59.607589 1120415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:34:59.652223 1120415 cri.go:89] found id: ""
	I0603 13:34:59.652265 1120415 logs.go:276] 0 containers: []
	W0603 13:34:59.652274 1120415 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:34:59.652280 1120415 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:34:59.652357 1120415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:34:59.696189 1120415 cri.go:89] found id: ""
	I0603 13:34:59.696225 1120415 logs.go:276] 0 containers: []
	W0603 13:34:59.696236 1120415 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:34:59.696244 1120415 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:34:59.696314 1120415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:34:59.732768 1120415 cri.go:89] found id: ""
	I0603 13:34:59.732801 1120415 logs.go:276] 0 containers: []
	W0603 13:34:59.732812 1120415 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:34:59.732821 1120415 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:34:59.732900 1120415 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:34:59.770463 1120415 cri.go:89] found id: ""
	I0603 13:34:59.770592 1120415 logs.go:276] 0 containers: []
	W0603 13:34:59.770624 1120415 logs.go:278] No container was found matching "kindnet"
	I0603 13:34:59.770642 1120415 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:34:59.770668 1120415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:34:59.881801 1120415 logs.go:123] Gathering logs for container status ...
	I0603 13:34:59.881848 1120415 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:34:59.935528 1120415 logs.go:123] Gathering logs for kubelet ...
	I0603 13:34:59.935562 1120415 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:34:59.999627 1120415 logs.go:123] Gathering logs for dmesg ...
	I0603 13:34:59.999669 1120415 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:35:00.016761 1120415 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:35:00.016796 1120415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:35:00.162403 1120415 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0603 13:35:00.162453 1120415 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 13:35:00.162494 1120415 out.go:239] * 
	W0603 13:35:00.162565 1120415 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:35:00.162590 1120415 out.go:239] * 
	W0603 13:35:00.163456 1120415 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:35:00.296779 1120415 out.go:177] 
	W0603 13:35:00.300796 1120415 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:35:00.300863 1120415 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 13:35:00.300887 1120415 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 13:35:00.303161 1120415 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-423965 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
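The kubeadm failure captured above is the generic wait-control-plane timeout: the kubelet never answered its health check on 127.0.0.1:10248, so no control-plane containers were created and the apiserver on localhost:8443 stayed unreachable. minikube's own suggestion logged above points at a kubelet/CRI-O cgroup-driver mismatch. A minimal diagnostic sketch for that theory (illustrative only, not part of this test run; assumes the kubernetes-upgrade-423965 VM is still up and reachable over SSH):

    out/minikube-linux-amd64 -p kubernetes-upgrade-423965 ssh -- sudo grep -i cgroup_manager /etc/crio/crio.conf.d/02-crio.conf   # cgroup driver CRI-O is configured with
    out/minikube-linux-amd64 -p kubernetes-upgrade-423965 ssh -- sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml           # cgroup driver the kubelet was told to use
    out/minikube-linux-amd64 -p kubernetes-upgrade-423965 ssh -- sudo journalctl -u kubelet -n 100 --no-pager                     # kubelet's own reason for not coming up

If the two drivers disagree, the suggestion logged above (passing --extra-config=kubelet.cgroup-driver=systemd to minikube start) is the usual fix.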
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-423965
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-423965: (1.520003131s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-423965 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-423965 status --format={{.Host}}: exit status 7 (73.842913ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
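Exit status 7 here is expected rather than a bug: minikube status encodes the stopped/running state of the host, cluster, and Kubernetes components in its exit code instead of returning 0, which is why the test treats it as "may be ok" and only reads the Host field. To see the full per-component breakdown when reproducing this, the unformatted status output is enough (a sketch, not part of the test):

    out/minikube-linux-amd64 -p kubernetes-upgrade-423965 status   # prints host/kubelet/apiserver/kubeconfig states; a nonzero exit is normal for a stopped profile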
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-423965 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-423965 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.406577666s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-423965 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-423965 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-423965 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.831042ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-423965] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-423965
	    minikube start -p kubernetes-upgrade-423965 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4239652 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-423965 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-423965 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-423965 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (23.276327702s)
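The restart back onto v1.30.1 completes in about 23s, yet the test goes on to report a failure at version_upgrade_test.go:279; the post-mortem logs that follow were gathered at that point. When reproducing locally, a quick sanity check that the restarted cluster really serves v1.30.1 (a sketch; the test's own check is the kubectl version call above):

    kubectl --context kubernetes-upgrade-423965 version --output=json   # serverVersion.gitVersion should report v1.30.1
    kubectl --context kubernetes-upgrade-423965 get nodes -o wide       # the node's kubelet version should also be v1.30.1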
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-06-03 13:36:19.792970887 +0000 UTC m=+4331.997529403
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-423965 -n kubernetes-upgrade-423965
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-423965 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-423965 logs -n 25: (1.375331702s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-021279 sudo              | cilium-021279             | jenkins | v1.33.1 | 03 Jun 24 13:32 UTC |                     |
	|         | systemctl status crio --all        |                           |         |         |                     |                     |
	|         | --full --no-pager                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-021279 sudo              | cilium-021279             | jenkins | v1.33.1 | 03 Jun 24 13:32 UTC |                     |
	|         | systemctl cat crio --no-pager      |                           |         |         |                     |                     |
	| ssh     | -p cilium-021279 sudo find         | cilium-021279             | jenkins | v1.33.1 | 03 Jun 24 13:32 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-021279 sudo crio         | cilium-021279             | jenkins | v1.33.1 | 03 Jun 24 13:32 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-021279                   | cilium-021279             | jenkins | v1.33.1 | 03 Jun 24 13:32 UTC | 03 Jun 24 13:32 UTC |
	| start   | -p stopped-upgrade-259751          | minikube                  | jenkins | v1.26.0 | 03 Jun 24 13:32 UTC | 03 Jun 24 13:33 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-541206 sudo        | NoKubernetes-541206       | jenkins | v1.33.1 | 03 Jun 24 13:32 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-541206             | NoKubernetes-541206       | jenkins | v1.33.1 | 03 Jun 24 13:33 UTC | 03 Jun 24 13:33 UTC |
	| start   | -p NoKubernetes-541206             | NoKubernetes-541206       | jenkins | v1.33.1 | 03 Jun 24 13:33 UTC | 03 Jun 24 13:33 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-541206 sudo        | NoKubernetes-541206       | jenkins | v1.33.1 | 03 Jun 24 13:33 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-439186          | running-upgrade-439186    | jenkins | v1.33.1 | 03 Jun 24 13:33 UTC | 03 Jun 24 13:33 UTC |
	| delete  | -p NoKubernetes-541206             | NoKubernetes-541206       | jenkins | v1.33.1 | 03 Jun 24 13:33 UTC | 03 Jun 24 13:33 UTC |
	| start   | -p cert-expiration-925487          | cert-expiration-925487    | jenkins | v1.33.1 | 03 Jun 24 13:33 UTC | 03 Jun 24 13:34 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-977376       | force-systemd-flag-977376 | jenkins | v1.33.1 | 03 Jun 24 13:33 UTC | 03 Jun 24 13:34 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-259751 stop        | minikube                  | jenkins | v1.26.0 | 03 Jun 24 13:33 UTC | 03 Jun 24 13:33 UTC |
	| start   | -p stopped-upgrade-259751          | stopped-upgrade-259751    | jenkins | v1.33.1 | 03 Jun 24 13:33 UTC | 03 Jun 24 13:35 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-977376 ssh cat  | force-systemd-flag-977376 | jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-977376       | force-systemd-flag-977376 | jenkins | v1.33.1 | 03 Jun 24 13:34 UTC | 03 Jun 24 13:34 UTC |
	| start   | -p pause-374510 --memory=2048      | pause-374510              | jenkins | v1.33.1 | 03 Jun 24 13:34 UTC |                     |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-423965       | kubernetes-upgrade-423965 | jenkins | v1.33.1 | 03 Jun 24 13:35 UTC | 03 Jun 24 13:35 UTC |
	| start   | -p kubernetes-upgrade-423965       | kubernetes-upgrade-423965 | jenkins | v1.33.1 | 03 Jun 24 13:35 UTC | 03 Jun 24 13:35 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-259751          | stopped-upgrade-259751    | jenkins | v1.33.1 | 03 Jun 24 13:35 UTC | 03 Jun 24 13:35 UTC |
	| start   | -p cert-options-724800             | cert-options-724800       | jenkins | v1.33.1 | 03 Jun 24 13:35 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-423965       | kubernetes-upgrade-423965 | jenkins | v1.33.1 | 03 Jun 24 13:35 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-423965       | kubernetes-upgrade-423965 | jenkins | v1.33.1 | 03 Jun 24 13:35 UTC | 03 Jun 24 13:36 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:35:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:35:56.557960 1127686 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:35:56.558190 1127686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:35:56.558198 1127686 out.go:304] Setting ErrFile to fd 2...
	I0603 13:35:56.558202 1127686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:35:56.558352 1127686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:35:56.558895 1127686 out.go:298] Setting JSON to false
	I0603 13:35:56.559870 1127686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15504,"bootTime":1717406253,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:35:56.559939 1127686 start.go:139] virtualization: kvm guest
	I0603 13:35:56.563272 1127686 out.go:177] * [kubernetes-upgrade-423965] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:35:56.565207 1127686 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:35:56.565212 1127686 notify.go:220] Checking for updates...
	I0603 13:35:56.566862 1127686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:35:56.568378 1127686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:35:56.569737 1127686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:35:56.571184 1127686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:35:56.572603 1127686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:35:56.574458 1127686 config.go:182] Loaded profile config "kubernetes-upgrade-423965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:35:56.574935 1127686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:35:56.575001 1127686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:35:56.590597 1127686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0603 13:35:56.591102 1127686 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:35:56.591657 1127686 main.go:141] libmachine: Using API Version  1
	I0603 13:35:56.591681 1127686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:35:56.592029 1127686 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:35:56.592292 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:35:56.592613 1127686 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:35:56.592955 1127686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:35:56.592998 1127686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:35:56.607857 1127686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36891
	I0603 13:35:56.608294 1127686 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:35:56.608785 1127686 main.go:141] libmachine: Using API Version  1
	I0603 13:35:56.608807 1127686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:35:56.609177 1127686 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:35:56.609457 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:35:56.646505 1127686 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:35:56.647822 1127686 start.go:297] selected driver: kvm2
	I0603 13:35:56.647848 1127686 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-423965 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-423965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:35:56.647952 1127686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:35:56.648669 1127686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:35:56.648754 1127686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:35:56.663927 1127686 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:35:56.664309 1127686 cni.go:84] Creating CNI manager for ""
	I0603 13:35:56.664324 1127686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:35:56.664358 1127686 start.go:340] cluster config:
	{Name:kubernetes-upgrade-423965 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-423965 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:35:56.664476 1127686 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:35:56.666345 1127686 out.go:177] * Starting "kubernetes-upgrade-423965" primary control-plane node in "kubernetes-upgrade-423965" cluster
	I0603 13:35:53.374171 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:35:53.374772 1127176 main.go:141] libmachine: (cert-options-724800) DBG | unable to find current IP address of domain cert-options-724800 in network mk-cert-options-724800
	I0603 13:35:53.374815 1127176 main.go:141] libmachine: (cert-options-724800) DBG | I0603 13:35:53.374741 1127360 retry.go:31] will retry after 3.831028455s: waiting for machine to come up
	I0603 13:35:57.207065 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:35:57.207666 1127176 main.go:141] libmachine: (cert-options-724800) DBG | unable to find current IP address of domain cert-options-724800 in network mk-cert-options-724800
	I0603 13:35:57.207688 1127176 main.go:141] libmachine: (cert-options-724800) DBG | I0603 13:35:57.207612 1127360 retry.go:31] will retry after 5.223854759s: waiting for machine to come up
	I0603 13:35:56.236413 1126655 pod_ready.go:102] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"False"
	I0603 13:35:58.732262 1126655 pod_ready.go:102] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"False"
	I0603 13:35:56.667752 1127686 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:35:56.667795 1127686 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 13:35:56.667817 1127686 cache.go:56] Caching tarball of preloaded images
	I0603 13:35:56.667913 1127686 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:35:56.667927 1127686 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 13:35:56.668042 1127686 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/config.json ...
	I0603 13:35:56.668269 1127686 start.go:360] acquireMachinesLock for kubernetes-upgrade-423965: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:36:02.433613 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.434216 1127176 main.go:141] libmachine: (cert-options-724800) Found IP for machine: 192.168.72.155
	I0603 13:36:02.434245 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has current primary IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.434251 1127176 main.go:141] libmachine: (cert-options-724800) Reserving static IP address...
	I0603 13:36:02.434665 1127176 main.go:141] libmachine: (cert-options-724800) DBG | unable to find host DHCP lease matching {name: "cert-options-724800", mac: "52:54:00:5f:e0:98", ip: "192.168.72.155"} in network mk-cert-options-724800
	I0603 13:36:02.514191 1127176 main.go:141] libmachine: (cert-options-724800) DBG | Getting to WaitForSSH function...
	I0603 13:36:02.514210 1127176 main.go:141] libmachine: (cert-options-724800) Reserved static IP address: 192.168.72.155
	I0603 13:36:02.514222 1127176 main.go:141] libmachine: (cert-options-724800) Waiting for SSH to be available...
	I0603 13:36:02.517272 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.517904 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:02.517958 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.518105 1127176 main.go:141] libmachine: (cert-options-724800) DBG | Using SSH client type: external
	I0603 13:36:02.518127 1127176 main.go:141] libmachine: (cert-options-724800) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/cert-options-724800/id_rsa (-rw-------)
	I0603 13:36:02.518155 1127176 main.go:141] libmachine: (cert-options-724800) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/cert-options-724800/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:36:02.518162 1127176 main.go:141] libmachine: (cert-options-724800) DBG | About to run SSH command:
	I0603 13:36:02.518170 1127176 main.go:141] libmachine: (cert-options-724800) DBG | exit 0
	I0603 13:36:02.641730 1127176 main.go:141] libmachine: (cert-options-724800) DBG | SSH cmd err, output: <nil>: 
	I0603 13:36:02.642024 1127176 main.go:141] libmachine: (cert-options-724800) KVM machine creation complete!
	I0603 13:36:02.642371 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetConfigRaw
	I0603 13:36:02.642981 1127176 main.go:141] libmachine: (cert-options-724800) Calling .DriverName
	I0603 13:36:02.643232 1127176 main.go:141] libmachine: (cert-options-724800) Calling .DriverName
	I0603 13:36:02.643414 1127176 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 13:36:02.643425 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetState
	I0603 13:36:02.644868 1127176 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 13:36:02.644877 1127176 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 13:36:02.644881 1127176 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 13:36:02.644886 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:02.647248 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.647641 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:02.647666 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.647862 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:02.648084 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:02.648283 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:02.648442 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:02.648601 1127176 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:02.648886 1127176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.155 22 <nil> <nil>}
	I0603 13:36:02.648895 1127176 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 13:36:02.752952 1127176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:36:02.752964 1127176 main.go:141] libmachine: Detecting the provisioner...
	I0603 13:36:02.752972 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:02.756060 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.756466 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:02.756526 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.756795 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:02.757020 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:02.757202 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:02.757366 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:02.757548 1127176 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:02.757738 1127176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.155 22 <nil> <nil>}
	I0603 13:36:02.757743 1127176 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 13:36:02.866931 1127176 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 13:36:02.866995 1127176 main.go:141] libmachine: found compatible host: buildroot
	I0603 13:36:02.867000 1127176 main.go:141] libmachine: Provisioning with buildroot...
	I0603 13:36:02.867007 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetMachineName
	I0603 13:36:02.867246 1127176 buildroot.go:166] provisioning hostname "cert-options-724800"
	I0603 13:36:02.867274 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetMachineName
	I0603 13:36:02.867471 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:02.870217 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.870629 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:02.870660 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.870840 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:02.871023 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:02.871186 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:02.871315 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:02.871491 1127176 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:02.871669 1127176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.155 22 <nil> <nil>}
	I0603 13:36:02.871676 1127176 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-724800 && echo "cert-options-724800" | sudo tee /etc/hostname
	I0603 13:36:02.989541 1127176 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-724800
	
	I0603 13:36:02.989560 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:02.992547 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.992887 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:02.992906 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:02.993174 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:02.993435 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:02.993620 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:02.993790 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:02.993942 1127176 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:02.994112 1127176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.155 22 <nil> <nil>}
	I0603 13:36:02.994122 1127176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-724800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-724800/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-724800' | sudo tee -a /etc/hosts; 
				fi
			fi
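The two SSH commands above are the buildroot provisioner's hostname step: set the persistent hostname via /etc/hostname, then make sure /etc/hosts maps 127.0.1.1 to it. A purely illustrative way to confirm the result from inside the guest:

    # illustrative verification only
    hostname
    cat /etc/hostname
    grep 127.0.1.1 /etc/hosts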
	I0603 13:36:03.994906 1127686 start.go:364] duration metric: took 7.326583365s to acquireMachinesLock for "kubernetes-upgrade-423965"
	I0603 13:36:03.994970 1127686 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:36:03.994979 1127686 fix.go:54] fixHost starting: 
	I0603 13:36:03.995375 1127686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:36:03.995437 1127686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:36:04.016262 1127686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0603 13:36:04.016686 1127686 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:36:04.017337 1127686 main.go:141] libmachine: Using API Version  1
	I0603 13:36:04.017373 1127686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:36:04.017759 1127686 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:36:04.018009 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:36:04.018195 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetState
	I0603 13:36:04.020163 1127686 fix.go:112] recreateIfNeeded on kubernetes-upgrade-423965: state=Running err=<nil>
	W0603 13:36:04.020183 1127686 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:36:04.022466 1127686 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-423965" VM ...
	I0603 13:36:00.732473 1126655 pod_ready.go:102] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"False"
	I0603 13:36:02.733175 1126655 pod_ready.go:102] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"False"
	I0603 13:36:03.111675 1127176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:36:03.111701 1127176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:36:03.111742 1127176 buildroot.go:174] setting up certificates
	I0603 13:36:03.111752 1127176 provision.go:84] configureAuth start
	I0603 13:36:03.111761 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetMachineName
	I0603 13:36:03.112117 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetIP
	I0603 13:36:03.115092 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.115448 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:03.115465 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.115625 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:03.118227 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.118656 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:03.118690 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.118822 1127176 provision.go:143] copyHostCerts
	I0603 13:36:03.118888 1127176 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:36:03.118899 1127176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:36:03.118951 1127176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:36:03.119063 1127176 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:36:03.119068 1127176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:36:03.119093 1127176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:36:03.119142 1127176 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:36:03.119145 1127176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:36:03.119163 1127176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:36:03.119207 1127176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.cert-options-724800 san=[127.0.0.1 192.168.72.155 cert-options-724800 localhost minikube]
	I0603 13:36:03.262873 1127176 provision.go:177] copyRemoteCerts
	I0603 13:36:03.262935 1127176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:36:03.262962 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:03.265786 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.266080 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:03.266096 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.266357 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:03.266579 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:03.266742 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:03.266899 1127176 sshutil.go:53] new ssh client: &{IP:192.168.72.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/cert-options-724800/id_rsa Username:docker}
	I0603 13:36:03.358804 1127176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:36:03.386491 1127176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0603 13:36:03.414135 1127176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
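provision.go:117 above generates a server certificate whose SANs are listed in that line (127.0.0.1, the machine IP, the machine name, localhost, minikube), and the three scp lines copy the CA and server key pair into /etc/docker on the guest. An illustrative check of the SANs on the copied certificate, using plain openssl inside the guest:

    # illustrative only
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'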
	I0603 13:36:03.442708 1127176 provision.go:87] duration metric: took 330.942404ms to configureAuth
	I0603 13:36:03.442727 1127176 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:36:03.442931 1127176 config.go:182] Loaded profile config "cert-options-724800": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:36:03.443017 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:03.446562 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.447042 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:03.447063 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.447332 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:03.447580 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:03.447740 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:03.447892 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:03.448242 1127176 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:03.448431 1127176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.155 22 <nil> <nil>}
	I0603 13:36:03.448441 1127176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:36:03.742342 1127176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
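The %!s(MISSING) in the command above is a logging artifact (a printf verb whose argument never reached the log formatter); the payload that actually lands in /etc/sysconfig/crio.minikube is the CRIO_MINIKUBE_OPTIONS line echoed back in the output. An illustrative way to confirm it is in place and picked up, assuming the guest's crio.service sources that file as an environment file (as the minikube ISO's unit is expected to):

    # illustrative only
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environmentfile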
	
	I0603 13:36:03.742356 1127176 main.go:141] libmachine: Checking connection to Docker...
	I0603 13:36:03.742363 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetURL
	I0603 13:36:03.743903 1127176 main.go:141] libmachine: (cert-options-724800) DBG | Using libvirt version 6000000
	I0603 13:36:03.746923 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.747289 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:03.747308 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.747507 1127176 main.go:141] libmachine: Docker is up and running!
	I0603 13:36:03.747530 1127176 main.go:141] libmachine: Reticulating splines...
	I0603 13:36:03.747537 1127176 client.go:171] duration metric: took 25.885308846s to LocalClient.Create
	I0603 13:36:03.747582 1127176 start.go:167] duration metric: took 25.885383698s to libmachine.API.Create "cert-options-724800"
	I0603 13:36:03.747589 1127176 start.go:293] postStartSetup for "cert-options-724800" (driver="kvm2")
	I0603 13:36:03.747598 1127176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:36:03.747613 1127176 main.go:141] libmachine: (cert-options-724800) Calling .DriverName
	I0603 13:36:03.747938 1127176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:36:03.747959 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:03.750356 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.750781 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:03.750813 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.750967 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:03.751135 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:03.751276 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:03.751427 1127176 sshutil.go:53] new ssh client: &{IP:192.168.72.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/cert-options-724800/id_rsa Username:docker}
	I0603 13:36:03.834624 1127176 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:36:03.839228 1127176 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:36:03.839248 1127176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:36:03.839330 1127176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:36:03.839418 1127176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:36:03.839546 1127176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:36:03.851444 1127176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:36:03.877736 1127176 start.go:296] duration metric: took 130.133261ms for postStartSetup
	I0603 13:36:03.877780 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetConfigRaw
	I0603 13:36:03.878516 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetIP
	I0603 13:36:03.881839 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.882233 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:03.882255 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.882561 1127176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/cert-options-724800/config.json ...
	I0603 13:36:03.882726 1127176 start.go:128] duration metric: took 26.046110138s to createHost
	I0603 13:36:03.882742 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:03.885603 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.885967 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:03.885988 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.886097 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:03.886289 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:03.886451 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:03.886650 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:03.886824 1127176 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:03.886998 1127176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.155 22 <nil> <nil>}
	I0603 13:36:03.887003 1127176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:36:03.994721 1127176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421763.980339386
	
	I0603 13:36:03.994737 1127176 fix.go:216] guest clock: 1717421763.980339386
	I0603 13:36:03.994745 1127176 fix.go:229] Guest: 2024-06-03 13:36:03.980339386 +0000 UTC Remote: 2024-06-03 13:36:03.882731094 +0000 UTC m=+45.905295852 (delta=97.608292ms)
	I0603 13:36:03.994783 1127176 fix.go:200] guest clock delta is within tolerance: 97.608292ms
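The date +%!s(MISSING).%!N(MISSING) line above is another logging artifact; by all appearances the command sent to the guest is date +%s.%N, whose output is the 1717421763.980339386 timestamp that fix.go then compares against the host clock, accepting the run because the ~97ms delta is within tolerance. A hand-rolled version of the same check (IP and key path taken from this run; illustrative only):

    # illustrative only
    GUEST=$(ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/cert-options-724800/id_rsa \
      docker@192.168.72.155 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "delta: %.6fs\n", h - g }'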
	I0603 13:36:03.994787 1127176 start.go:83] releasing machines lock for "cert-options-724800", held for 26.158355176s
	I0603 13:36:03.994815 1127176 main.go:141] libmachine: (cert-options-724800) Calling .DriverName
	I0603 13:36:03.995187 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetIP
	I0603 13:36:03.998836 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.999100 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:03.999122 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:03.999438 1127176 main.go:141] libmachine: (cert-options-724800) Calling .DriverName
	I0603 13:36:04.000036 1127176 main.go:141] libmachine: (cert-options-724800) Calling .DriverName
	I0603 13:36:04.000236 1127176 main.go:141] libmachine: (cert-options-724800) Calling .DriverName
	I0603 13:36:04.000349 1127176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:36:04.000400 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:04.000516 1127176 ssh_runner.go:195] Run: cat /version.json
	I0603 13:36:04.000538 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHHostname
	I0603 13:36:04.003465 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:04.003515 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:04.003980 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:04.004005 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:04.004135 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:04.004170 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:04.004207 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:04.004357 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHPort
	I0603 13:36:04.004468 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:04.004583 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHKeyPath
	I0603 13:36:04.004652 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:04.004714 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetSSHUsername
	I0603 13:36:04.004790 1127176 sshutil.go:53] new ssh client: &{IP:192.168.72.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/cert-options-724800/id_rsa Username:docker}
	I0603 13:36:04.004836 1127176 sshutil.go:53] new ssh client: &{IP:192.168.72.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/cert-options-724800/id_rsa Username:docker}
	I0603 13:36:04.110721 1127176 ssh_runner.go:195] Run: systemctl --version
	I0603 13:36:04.116948 1127176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:36:04.279054 1127176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:36:04.287796 1127176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:36:04.287850 1127176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:36:04.305775 1127176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:36:04.305789 1127176 start.go:494] detecting cgroup driver to use...
	I0603 13:36:04.305854 1127176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:36:04.324765 1127176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:36:04.343214 1127176 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:36:04.343266 1127176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:36:04.360055 1127176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:36:04.375023 1127176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:36:04.508991 1127176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:36:04.706742 1127176 docker.go:233] disabling docker service ...
	I0603 13:36:04.706820 1127176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:36:04.723779 1127176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:36:04.740571 1127176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:36:04.877792 1127176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:36:05.016403 1127176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:36:05.033983 1127176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:36:05.055175 1127176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:36:05.055230 1127176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:05.068347 1127176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:36:05.068418 1127176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:05.080674 1127176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:05.092844 1127176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:05.105283 1127176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:36:05.118527 1127176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:05.133159 1127176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:05.154930 1127176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
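The sed/grep edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) all target /etc/crio/crio.conf.d/02-crio.conf. A rough illustration of the drop-in they are meant to produce, written to an .example file so as not to claim this is the exact file from the run (section placement follows CRI-O's TOML schema; the real file's layout may differ):

    # illustrative sketch only
    sudo tee /etc/crio/crio.conf.d/02-crio.conf.example >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF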
	I0603 13:36:05.167324 1127176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:36:05.178631 1127176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:36:05.178712 1127176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:36:05.194782 1127176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
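The sysctl probe above fails only because br_netfilter is not loaded yet, so the code falls back to modprobe and then enables IPv4 forwarding directly through /proc. Illustrative checks that the kernel ends up in the expected state:

    # illustrative verification only
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward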
	I0603 13:36:05.207648 1127176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:36:05.351039 1127176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:36:05.521144 1127176 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:36:05.521215 1127176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:36:05.527894 1127176 start.go:562] Will wait 60s for crictl version
	I0603 13:36:05.527962 1127176 ssh_runner.go:195] Run: which crictl
	I0603 13:36:05.532382 1127176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:36:05.579684 1127176 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
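The /etc/crictl.yaml written a few lines up is what lets the bare sudo /usr/bin/crictl version call above reach CRI-O without an explicit --runtime-endpoint flag. Purely illustrative follow-up queries against the same socket:

    # illustrative only
    sudo crictl info | head
    sudo crictl ps -a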
	I0603 13:36:05.579752 1127176 ssh_runner.go:195] Run: crio --version
	I0603 13:36:05.610325 1127176 ssh_runner.go:195] Run: crio --version
	I0603 13:36:05.647606 1127176 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:36:04.023914 1127686 machine.go:94] provisionDockerMachine start ...
	I0603 13:36:04.023948 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:36:04.024228 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:04.026974 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.027434 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:04.027465 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.027631 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:36:04.027931 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:04.028174 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:04.028356 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:36:04.028560 1127686 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:04.028814 1127686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:36:04.028827 1127686 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:36:04.146611 1127686 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-423965
	
	I0603 13:36:04.146648 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetMachineName
	I0603 13:36:04.146916 1127686 buildroot.go:166] provisioning hostname "kubernetes-upgrade-423965"
	I0603 13:36:04.146954 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetMachineName
	I0603 13:36:04.147190 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:04.150541 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.151110 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:04.151144 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.151290 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:36:04.151499 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:04.151713 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:04.151951 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:36:04.152175 1127686 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:04.152414 1127686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:36:04.152436 1127686 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-423965 && echo "kubernetes-upgrade-423965" | sudo tee /etc/hostname
	I0603 13:36:04.278143 1127686 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-423965
	
	I0603 13:36:04.278174 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:04.281819 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.282248 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:04.282284 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.282506 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:36:04.282744 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:04.282962 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:04.283149 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:36:04.283375 1127686 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:04.283642 1127686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:36:04.283669 1127686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-423965' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-423965/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-423965' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:36:04.403414 1127686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:36:04.403451 1127686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:36:04.403479 1127686 buildroot.go:174] setting up certificates
	I0603 13:36:04.403492 1127686 provision.go:84] configureAuth start
	I0603 13:36:04.403506 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetMachineName
	I0603 13:36:04.403923 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetIP
	I0603 13:36:04.407199 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.407577 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:04.407613 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.407812 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:04.410461 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.410855 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:04.410890 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.411022 1127686 provision.go:143] copyHostCerts
	I0603 13:36:04.411112 1127686 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:36:04.411124 1127686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:36:04.411179 1127686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:36:04.411301 1127686 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:36:04.411311 1127686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:36:04.411336 1127686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:36:04.411439 1127686 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:36:04.411453 1127686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:36:04.411484 1127686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:36:04.411557 1127686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-423965 san=[127.0.0.1 192.168.50.64 kubernetes-upgrade-423965 localhost minikube]
	I0603 13:36:04.500681 1127686 provision.go:177] copyRemoteCerts
	I0603 13:36:04.500749 1127686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:36:04.500780 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:04.504297 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.504811 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:04.504847 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.505010 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:36:04.505194 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:04.505361 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:36:04.505613 1127686 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa Username:docker}
	I0603 13:36:04.598664 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:36:04.626020 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0603 13:36:04.656247 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:36:04.688734 1127686 provision.go:87] duration metric: took 285.225587ms to configureAuth
	I0603 13:36:04.688771 1127686 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:36:04.689013 1127686 config.go:182] Loaded profile config "kubernetes-upgrade-423965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:36:04.689121 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:04.692339 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.692785 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:04.692819 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:04.693026 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:36:04.693273 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:04.693513 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:04.693822 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:36:04.694068 1127686 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:04.694342 1127686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:36:04.694364 1127686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:36:05.682184 1127686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:36:05.682217 1127686 machine.go:97] duration metric: took 1.658286247s to provisionDockerMachine
	I0603 13:36:05.682234 1127686 start.go:293] postStartSetup for "kubernetes-upgrade-423965" (driver="kvm2")
	I0603 13:36:05.682248 1127686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:36:05.682270 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:36:05.682666 1127686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:36:05.682703 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:05.685905 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:05.686290 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:05.686319 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:05.686523 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:36:05.686769 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:05.686993 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:36:05.687188 1127686 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa Username:docker}
	I0603 13:36:05.856332 1127686 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:36:05.871673 1127686 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:36:05.871715 1127686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:36:05.871806 1127686 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:36:05.871953 1127686 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:36:05.872130 1127686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:36:05.930686 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
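	(Editor's note) postStartSetup syncs local assets from .minikube/files into the guest; here that is a single certificate. A simple way to confirm the synced copy matches the source, using the two paths from the scp line (illustrative only):
	    sha256sum /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem   # on the host
	    sudo sha256sum /etc/ssl/certs/10862512.pem                                                              # on the guest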
	I0603 13:36:06.034788 1127686 start.go:296] duration metric: took 352.534459ms for postStartSetup
	I0603 13:36:06.034834 1127686 fix.go:56] duration metric: took 2.039855674s for fixHost
	I0603 13:36:06.034857 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:06.038041 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:06.038724 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:06.038755 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:06.039060 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:36:06.039256 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:06.039402 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:06.039756 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:36:06.039988 1127686 main.go:141] libmachine: Using SSH client type: native
	I0603 13:36:06.040223 1127686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0603 13:36:06.040237 1127686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:36:06.377472 1127686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421766.339646448
	
	I0603 13:36:06.377505 1127686 fix.go:216] guest clock: 1717421766.339646448
	I0603 13:36:06.377517 1127686 fix.go:229] Guest: 2024-06-03 13:36:06.339646448 +0000 UTC Remote: 2024-06-03 13:36:06.034837763 +0000 UTC m=+9.514813247 (delta=304.808685ms)
	I0603 13:36:06.377545 1127686 fix.go:200] guest clock delta is within tolerance: 304.808685ms
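	(Editor's note) fix.go reads the guest clock over SSH and accepts the ~305ms skew because it is within tolerance. A minimal manual sketch of the same comparison, assuming SSH access with the key path shown earlier in this log:
	    key=/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa
	    host_ts=$(date +%s.%N)
	    guest_ts=$(ssh -i "$key" docker@192.168.50.64 'date +%s.%N')
	    awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN{printf "guest-host clock delta: %.3fs\n", g-h}'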
	I0603 13:36:06.377552 1127686 start.go:83] releasing machines lock for "kubernetes-upgrade-423965", held for 2.382608791s
	I0603 13:36:06.377576 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:36:06.377884 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetIP
	I0603 13:36:06.382052 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:06.382457 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:06.382484 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:06.382676 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:36:06.383256 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:36:06.383531 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .DriverName
	I0603 13:36:06.383684 1127686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:36:06.383757 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:06.383862 1127686 ssh_runner.go:195] Run: cat /version.json
	I0603 13:36:06.383877 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHHostname
	I0603 13:36:06.387214 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:06.387982 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:06.388014 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:06.388056 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:36:06.388296 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:06.388516 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:36:06.388719 1127686 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa Username:docker}
	I0603 13:36:06.389669 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:06.390174 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:06.390197 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:06.390392 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHPort
	I0603 13:36:06.390610 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHKeyPath
	I0603 13:36:06.390796 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetSSHUsername
	I0603 13:36:06.390981 1127686 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/kubernetes-upgrade-423965/id_rsa Username:docker}
	I0603 13:36:05.649017 1127176 main.go:141] libmachine: (cert-options-724800) Calling .GetIP
	I0603 13:36:05.652382 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:05.652734 1127176 main.go:141] libmachine: (cert-options-724800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:e0:98", ip: ""} in network mk-cert-options-724800: {Iface:virbr4 ExpiryTime:2024-06-03 14:35:53 +0000 UTC Type:0 Mac:52:54:00:5f:e0:98 Iaid: IPaddr:192.168.72.155 Prefix:24 Hostname:cert-options-724800 Clientid:01:52:54:00:5f:e0:98}
	I0603 13:36:05.652758 1127176 main.go:141] libmachine: (cert-options-724800) DBG | domain cert-options-724800 has defined IP address 192.168.72.155 and MAC address 52:54:00:5f:e0:98 in network mk-cert-options-724800
	I0603 13:36:05.653053 1127176 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 13:36:05.658068 1127176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
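	(Editor's note) The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway IP, so the guest can reach the host by that name. A trivial verification (illustrative):
	    grep 'host.minikube.internal' /etc/hosts   # expect: 192.168.72.1  host.minikube.internal
	    getent hosts host.minikube.internal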
	I0603 13:36:05.672239 1127176 kubeadm.go:877] updating cluster {Name:cert-options-724800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.1 ClusterName:cert-options-724800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.155 Port:8555 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:36:05.672416 1127176 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:36:05.672485 1127176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:36:05.711296 1127176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:36:05.711384 1127176 ssh_runner.go:195] Run: which lz4
	I0603 13:36:05.717340 1127176 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:36:05.722244 1127176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:36:05.722273 1127176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:36:07.261600 1127176 crio.go:462] duration metric: took 1.544281225s to copy over tarball
	I0603 13:36:07.261696 1127176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
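	(Editor's note) Because the cert-options node had no preloaded images, minikube copies the ~395 MB lz4 tarball and unpacks it into /var to populate CRI-O's image store. To peek at what such a preload contains without extracting it on a guest, a hedged example using the cached tarball path from the scp line:
	    lz4 -dc /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 \
	      | tar -t | head -n 20   # lists the paths that 'tar -C /var -xf' creates on the guest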
	I0603 13:36:06.596272 1127686 ssh_runner.go:195] Run: systemctl --version
	I0603 13:36:06.642619 1127686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:36:06.940483 1127686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:36:06.955463 1127686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:36:06.955572 1127686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:36:06.973025 1127686 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
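	(Editor's note) No bridge/podman CNI configs were present, so nothing was renamed; when they do exist, minikube sidelines them by appending .mk_disabled. A hypothetical check of the result:
	    ls -l /etc/cni/net.d/   # any *bridge*/*podman* configs would now carry a .mk_disabled suffix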
	I0603 13:36:06.973056 1127686 start.go:494] detecting cgroup driver to use...
	I0603 13:36:06.973133 1127686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:36:07.005308 1127686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:36:07.030679 1127686 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:36:07.030748 1127686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:36:07.055007 1127686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:36:07.082531 1127686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:36:07.337387 1127686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:36:07.546213 1127686 docker.go:233] disabling docker service ...
	I0603 13:36:07.546308 1127686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:36:07.569569 1127686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:36:07.587043 1127686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:36:07.785745 1127686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:36:07.980179 1127686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
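	(Editor's note) The sequence above stops, disables and masks cri-docker and docker so that CRI-O remains the only container runtime on the guest. An illustrative confirmation, not taken from the log:
	    systemctl is-enabled docker.service cri-docker.service 2>/dev/null   # expect "masked"
	    systemctl is-active docker                                            # expect "inactive"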
	I0603 13:36:08.000277 1127686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:36:08.023521 1127686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:36:08.023598 1127686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:08.037025 1127686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:36:08.037134 1127686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:08.050164 1127686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:08.068356 1127686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:08.084933 1127686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:36:08.099586 1127686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:08.113923 1127686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:08.137487 1127686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:36:08.161211 1127686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:36:08.176839 1127686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:36:08.191836 1127686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:36:08.397525 1127686 ssh_runner.go:195] Run: sudo systemctl restart crio
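	(Editor's note) Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup and unprivileged-port sysctl that the later kubeadm config relies on. A sketch of how one could confirm the resulting values (expected lines reconstructed from the commands, not copied from the guest):
	    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected after the edits:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])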
	I0603 13:36:08.841749 1127686 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:36:08.841834 1127686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:36:08.848714 1127686 start.go:562] Will wait 60s for crictl version
	I0603 13:36:08.848808 1127686 ssh_runner.go:195] Run: which crictl
	I0603 13:36:08.857359 1127686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:36:09.088771 1127686 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
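	(Editor's note) The crictl version call works without flags because /etc/crictl.yaml, written a few lines above, points crictl at the CRI-O socket. An illustrative sanity check of that wiring:
	    sudo cat /etc/crictl.yaml
	    # runtime-endpoint: unix:///var/run/crio/crio.sock
	    sudo crictl info >/dev/null && echo "crictl can reach CRI-O over the configured socket"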
	I0603 13:36:09.088868 1127686 ssh_runner.go:195] Run: crio --version
	I0603 13:36:09.200447 1127686 ssh_runner.go:195] Run: crio --version
	I0603 13:36:09.274452 1127686 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:36:05.236934 1126655 pod_ready.go:102] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"False"
	I0603 13:36:07.735442 1126655 pod_ready.go:102] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"False"
	I0603 13:36:09.275828 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) Calling .GetIP
	I0603 13:36:09.279386 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:09.279962 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:e2:9b", ip: ""} in network mk-kubernetes-upgrade-423965: {Iface:virbr2 ExpiryTime:2024-06-03 14:35:31 +0000 UTC Type:0 Mac:52:54:00:cf:e2:9b Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-423965 Clientid:01:52:54:00:cf:e2:9b}
	I0603 13:36:09.280014 1127686 main.go:141] libmachine: (kubernetes-upgrade-423965) DBG | domain kubernetes-upgrade-423965 has defined IP address 192.168.50.64 and MAC address 52:54:00:cf:e2:9b in network mk-kubernetes-upgrade-423965
	I0603 13:36:09.280298 1127686 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:36:09.286493 1127686 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-423965 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:kubernetes-upgrade-423965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:36:09.286650 1127686 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:36:09.286724 1127686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:36:09.354076 1127686 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:36:09.354117 1127686 crio.go:433] Images already preloaded, skipping extraction
	I0603 13:36:09.354187 1127686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:36:09.403818 1127686 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:36:09.403851 1127686 cache_images.go:84] Images are preloaded, skipping loading
	I0603 13:36:09.403862 1127686 kubeadm.go:928] updating node { 192.168.50.64 8443 v1.30.1 crio true true} ...
	I0603 13:36:09.404012 1127686 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-423965 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-423965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:36:09.404106 1127686 ssh_runner.go:195] Run: crio config
	I0603 13:36:09.485258 1127686 cni.go:84] Creating CNI manager for ""
	I0603 13:36:09.485289 1127686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:36:09.485309 1127686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:36:09.485340 1127686 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.64 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-423965 NodeName:kubernetes-upgrade-423965 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:36:09.485570 1127686 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-423965"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:36:09.485648 1127686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:36:09.496764 1127686 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:36:09.496859 1127686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:36:09.508081 1127686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0603 13:36:09.530502 1127686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:36:09.557915 1127686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
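	(Editor's note) The generated kubeadm config shown earlier is written to /var/tmp/minikube/kubeadm.yaml.new here. kubeadm in v1.30 can sanity-check such a file before it is applied; this command is a hedged suggestion and is not part of the test run:
	    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new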
	I0603 13:36:09.581026 1127686 ssh_runner.go:195] Run: grep 192.168.50.64	control-plane.minikube.internal$ /etc/hosts
	I0603 13:36:09.586014 1127686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:36:09.740397 1127686 ssh_runner.go:195] Run: sudo systemctl start kubelet
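	(Editor's note) With the kubelet unit and its 10-kubeadm.conf drop-in scp'd into place, systemd is reloaded and kubelet started. A quick, illustrative way to confirm systemd merged the drop-in and the service is healthy:
	    systemctl cat kubelet                    # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
	    systemctl is-active kubelet && sudo journalctl -u kubelet -n 20 --no-pager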
	I0603 13:36:09.759331 1127686 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965 for IP: 192.168.50.64
	I0603 13:36:09.759358 1127686 certs.go:194] generating shared ca certs ...
	I0603 13:36:09.759385 1127686 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:36:09.759578 1127686 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:36:09.759638 1127686 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:36:09.759651 1127686 certs.go:256] generating profile certs ...
	I0603 13:36:09.759774 1127686 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/client.key
	I0603 13:36:09.759849 1127686 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.key.5b2d753b
	I0603 13:36:09.759905 1127686 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.key
	I0603 13:36:09.760064 1127686 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:36:09.760107 1127686 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:36:09.760121 1127686 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:36:09.760154 1127686 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:36:09.760194 1127686 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:36:09.760240 1127686 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:36:09.760298 1127686 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:36:09.761288 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:36:09.794371 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:36:09.825115 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:36:09.854758 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:36:09.883946 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0603 13:36:09.936559 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:36:09.971313 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:36:10.002842 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kubernetes-upgrade-423965/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:36:10.031951 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:36:10.061314 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:36:10.089585 1127686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:36:10.117589 1127686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:36:10.142920 1127686 ssh_runner.go:195] Run: openssl version
	I0603 13:36:10.150130 1127686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:36:10.163999 1127686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:36:10.170451 1127686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:36:10.170532 1127686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:36:10.177818 1127686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:36:10.189703 1127686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:36:10.202572 1127686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:36:10.208618 1127686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:36:10.208688 1127686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:36:10.215251 1127686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:36:10.227811 1127686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:36:10.248376 1127686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:36:10.254267 1127686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:36:10.254330 1127686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:36:10.261128 1127686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
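	(Editor's note) The ln -fs commands above implement OpenSSL's hashed-directory convention: each CA in /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so openssl can find it during verification. The same link could be produced manually like this (a sketch using the minikubeCA.pem path from the log):
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"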
	I0603 13:36:10.272570 1127686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:36:10.280011 1127686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:36:10.288262 1127686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:36:10.296096 1127686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:36:10.303183 1127686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:36:10.310781 1127686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:36:10.317730 1127686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
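	(Editor's note) Each 'openssl x509 -checkend 86400' call above exits non-zero if the certificate expires within 24 hours, which is what would trigger regeneration. A compact loop over the same certs (illustrative only):
	    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	        && echo "${c}: valid for >24h" || echo "${c}: expires within 24h"
	    done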
	I0603 13:36:10.325998 1127686 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-423965 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.1 ClusterName:kubernetes-upgrade-423965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:36:10.326118 1127686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:36:10.326201 1127686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:36:10.373755 1127686 cri.go:89] found id: "9f9172c9696585845b54c10ca4ea6b41f3394ec8486ed9c4ab5891fbc3b96f14"
	I0603 13:36:10.373782 1127686 cri.go:89] found id: "4b6dbc53928e4ba2daab10d4b6d0b6eb1fd4b64f6d2421046ee71594d758b997"
	I0603 13:36:10.373788 1127686 cri.go:89] found id: "13411edad0d27a257b4d7f2897fe094e6b62d4129301734dab0b431d4e92158c"
	I0603 13:36:10.373793 1127686 cri.go:89] found id: "29a4a7ee2a653ca6442bb0401ed8f8dce3d85bc1e3be5aa1511898dc686ce40c"
	I0603 13:36:10.373797 1127686 cri.go:89] found id: ""
	I0603 13:36:10.373844 1127686 ssh_runner.go:195] Run: sudo runc list -f json
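	(Editor's note) StartCluster first enumerates existing kube-system containers through the CRI, filtering on the io.kubernetes.pod.namespace label; the four IDs found above are the exited control-plane containers from the previous run. The equivalent manual query on the guest would be (hedged example):
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system            # table view with names and states
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system    # IDs only, as cri.go consumes them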
	
	
	==> CRI-O <==
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.529232054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421780529192443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17a2762b-d1ca-403b-9cdd-eb68029c2a19 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.529830725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16fbf6a8-4c05-4db0-b2a6-1c6190c31e97 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.529920991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16fbf6a8-4c05-4db0-b2a6-1c6190c31e97 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.530469787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43c634194be56fea4621fdd30aa44f902a8e3b3384b487115ac78b74dd8d27e7,PodSandboxId:a754b9bd541620480be4f3f62aa67dbcadbcebe2e83d7758ad4a285c85cfcac2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421773672761326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06471f8c52838fcd95d5dddd117edd0,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ca60d8760fe0ecf0010f39b82019fb9f93b9e89cd381dc60ce0e652ee6ae52,PodSandboxId:e8f5d3ba2717dc56797fbf3f93b157a7de9529fde586bc7a9d82d6d060ce3fa3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421773662009982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0403fb1b23cfb5e056bd4c6b9a53213,},Annotations:map[string]string{io.kubernetes.container.hash: bf169b55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1c7a33c877b8710dd6201bd6b79480a8bcc1450153e1ca4c24e5a652f1beec,PodSandboxId:3aefcf9c7f68fee446393354e83b3a616c7b2dc60fba0659d419d302cbe2ab08,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421773689481573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0410d613f7616fb04dfb9faef02de9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785cf860434eb043872577b202c26157ca5e312b5977754b22c78dae8238345c,PodSandboxId:74e3530da45a3452fb3d404dad47771bd2a510e13e4beefa71ac30ef24661625,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421773659708128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701543be718af0aba53582d471ad4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 3e2dc82c,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6dbc53928e4ba2daab10d4b6d0b6eb1fd4b64f6d2421046ee71594d758b997,PodSandboxId:8abf6c51477d1bb261d5d1cd6656fda648edb2ed08fc8bf00a0e73a168b9a933,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421766097445758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0403fb1b23cfb5e056bd4c6b9a53213,},Annotations:map[string]string{io.kubernetes.container.hash: bf169b55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9172c9696585845b54c10ca4ea6b41f3394ec8486ed9c4ab5891fbc3b96f14,PodSandboxId:ec65cb6508dd6c03b5a0cf31ebc5f40a5a779e2f2e3712d34237db2954bb0078,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421766155208906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06471f8c52838fcd95d5dddd117edd0,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13411edad0d27a257b4d7f2897fe094e6b62d4129301734dab0b431d4e92158c,PodSandboxId:f4d02e915f9b9a080d8dffb2083018ca24b82072e773d08c543cbc9ee015e9df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421766060018615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0410d613f7616fb04dfb9faef02de9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4a7ee2a653ca6442bb0401ed8f8dce3d85bc1e3be5aa1511898dc686ce40c,PodSandboxId:bc1d551004a8508a94fa6174df8ec0ff10adfd94703187151ce00b388937946a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421765944687663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701543be718af0aba53582d471ad4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 3e2dc82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16fbf6a8-4c05-4db0-b2a6-1c6190c31e97 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.571817948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5c5076b-321d-4df0-9771-6e71e30156a2 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.571897304Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5c5076b-321d-4df0-9771-6e71e30156a2 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.573357780Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9b627ab-6e81-48de-bb4b-d259dae9fdce name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.574028103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421780573996600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9b627ab-6e81-48de-bb4b-d259dae9fdce name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.574862759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea04b881-699a-47d9-88c3-1d0aa53ddec7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.574918950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea04b881-699a-47d9-88c3-1d0aa53ddec7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.575171242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43c634194be56fea4621fdd30aa44f902a8e3b3384b487115ac78b74dd8d27e7,PodSandboxId:a754b9bd541620480be4f3f62aa67dbcadbcebe2e83d7758ad4a285c85cfcac2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421773672761326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06471f8c52838fcd95d5dddd117edd0,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ca60d8760fe0ecf0010f39b82019fb9f93b9e89cd381dc60ce0e652ee6ae52,PodSandboxId:e8f5d3ba2717dc56797fbf3f93b157a7de9529fde586bc7a9d82d6d060ce3fa3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421773662009982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0403fb1b23cfb5e056bd4c6b9a53213,},Annotations:map[string]string{io.kubernetes.container.hash: bf169b55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1c7a33c877b8710dd6201bd6b79480a8bcc1450153e1ca4c24e5a652f1beec,PodSandboxId:3aefcf9c7f68fee446393354e83b3a616c7b2dc60fba0659d419d302cbe2ab08,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421773689481573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0410d613f7616fb04dfb9faef02de9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785cf860434eb043872577b202c26157ca5e312b5977754b22c78dae8238345c,PodSandboxId:74e3530da45a3452fb3d404dad47771bd2a510e13e4beefa71ac30ef24661625,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421773659708128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701543be718af0aba53582d471ad4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 3e2dc82c,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6dbc53928e4ba2daab10d4b6d0b6eb1fd4b64f6d2421046ee71594d758b997,PodSandboxId:8abf6c51477d1bb261d5d1cd6656fda648edb2ed08fc8bf00a0e73a168b9a933,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421766097445758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0403fb1b23cfb5e056bd4c6b9a53213,},Annotations:map[string]string{io.kubernetes.container.hash: bf169b55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9172c9696585845b54c10ca4ea6b41f3394ec8486ed9c4ab5891fbc3b96f14,PodSandboxId:ec65cb6508dd6c03b5a0cf31ebc5f40a5a779e2f2e3712d34237db2954bb0078,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421766155208906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06471f8c52838fcd95d5dddd117edd0,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13411edad0d27a257b4d7f2897fe094e6b62d4129301734dab0b431d4e92158c,PodSandboxId:f4d02e915f9b9a080d8dffb2083018ca24b82072e773d08c543cbc9ee015e9df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421766060018615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0410d613f7616fb04dfb9faef02de9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4a7ee2a653ca6442bb0401ed8f8dce3d85bc1e3be5aa1511898dc686ce40c,PodSandboxId:bc1d551004a8508a94fa6174df8ec0ff10adfd94703187151ce00b388937946a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421765944687663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701543be718af0aba53582d471ad4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 3e2dc82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea04b881-699a-47d9-88c3-1d0aa53ddec7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.631811095Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0ce1fa6-5ade-456f-8dff-f8f26f43b9e7 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.631941108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0ce1fa6-5ade-456f-8dff-f8f26f43b9e7 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.633313288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d357430-a72e-472e-b19b-abbf96c57ef4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.633744936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421780633724192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d357430-a72e-472e-b19b-abbf96c57ef4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.634419924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aae98a86-8a14-4351-b74d-02136df18a33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.634474833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aae98a86-8a14-4351-b74d-02136df18a33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.634645914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43c634194be56fea4621fdd30aa44f902a8e3b3384b487115ac78b74dd8d27e7,PodSandboxId:a754b9bd541620480be4f3f62aa67dbcadbcebe2e83d7758ad4a285c85cfcac2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421773672761326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06471f8c52838fcd95d5dddd117edd0,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ca60d8760fe0ecf0010f39b82019fb9f93b9e89cd381dc60ce0e652ee6ae52,PodSandboxId:e8f5d3ba2717dc56797fbf3f93b157a7de9529fde586bc7a9d82d6d060ce3fa3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421773662009982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0403fb1b23cfb5e056bd4c6b9a53213,},Annotations:map[string]string{io.kubernetes.container.hash: bf169b55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1c7a33c877b8710dd6201bd6b79480a8bcc1450153e1ca4c24e5a652f1beec,PodSandboxId:3aefcf9c7f68fee446393354e83b3a616c7b2dc60fba0659d419d302cbe2ab08,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421773689481573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0410d613f7616fb04dfb9faef02de9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785cf860434eb043872577b202c26157ca5e312b5977754b22c78dae8238345c,PodSandboxId:74e3530da45a3452fb3d404dad47771bd2a510e13e4beefa71ac30ef24661625,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421773659708128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701543be718af0aba53582d471ad4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 3e2dc82c,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6dbc53928e4ba2daab10d4b6d0b6eb1fd4b64f6d2421046ee71594d758b997,PodSandboxId:8abf6c51477d1bb261d5d1cd6656fda648edb2ed08fc8bf00a0e73a168b9a933,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421766097445758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0403fb1b23cfb5e056bd4c6b9a53213,},Annotations:map[string]string{io.kubernetes.container.hash: bf169b55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9172c9696585845b54c10ca4ea6b41f3394ec8486ed9c4ab5891fbc3b96f14,PodSandboxId:ec65cb6508dd6c03b5a0cf31ebc5f40a5a779e2f2e3712d34237db2954bb0078,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421766155208906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06471f8c52838fcd95d5dddd117edd0,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13411edad0d27a257b4d7f2897fe094e6b62d4129301734dab0b431d4e92158c,PodSandboxId:f4d02e915f9b9a080d8dffb2083018ca24b82072e773d08c543cbc9ee015e9df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421766060018615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0410d613f7616fb04dfb9faef02de9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4a7ee2a653ca6442bb0401ed8f8dce3d85bc1e3be5aa1511898dc686ce40c,PodSandboxId:bc1d551004a8508a94fa6174df8ec0ff10adfd94703187151ce00b388937946a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421765944687663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701543be718af0aba53582d471ad4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 3e2dc82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aae98a86-8a14-4351-b74d-02136df18a33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.673959989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=874a1379-012a-4251-acdf-a39d242ea567 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.674035852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=874a1379-012a-4251-acdf-a39d242ea567 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.676397389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f694512a-95f3-4a48-89e0-854032e41688 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.677229515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421780677199598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f694512a-95f3-4a48-89e0-854032e41688 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.677782925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0db3469-eaa4-49db-b13e-d5e04fd72d48 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.677832960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0db3469-eaa4-49db-b13e-d5e04fd72d48 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:36:20 kubernetes-upgrade-423965 crio[1879]: time="2024-06-03 13:36:20.678038072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43c634194be56fea4621fdd30aa44f902a8e3b3384b487115ac78b74dd8d27e7,PodSandboxId:a754b9bd541620480be4f3f62aa67dbcadbcebe2e83d7758ad4a285c85cfcac2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421773672761326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06471f8c52838fcd95d5dddd117edd0,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ca60d8760fe0ecf0010f39b82019fb9f93b9e89cd381dc60ce0e652ee6ae52,PodSandboxId:e8f5d3ba2717dc56797fbf3f93b157a7de9529fde586bc7a9d82d6d060ce3fa3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421773662009982,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0403fb1b23cfb5e056bd4c6b9a53213,},Annotations:map[string]string{io.kubernetes.container.hash: bf169b55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1c7a33c877b8710dd6201bd6b79480a8bcc1450153e1ca4c24e5a652f1beec,PodSandboxId:3aefcf9c7f68fee446393354e83b3a616c7b2dc60fba0659d419d302cbe2ab08,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421773689481573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0410d613f7616fb04dfb9faef02de9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:785cf860434eb043872577b202c26157ca5e312b5977754b22c78dae8238345c,PodSandboxId:74e3530da45a3452fb3d404dad47771bd2a510e13e4beefa71ac30ef24661625,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421773659708128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701543be718af0aba53582d471ad4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 3e2dc82c,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6dbc53928e4ba2daab10d4b6d0b6eb1fd4b64f6d2421046ee71594d758b997,PodSandboxId:8abf6c51477d1bb261d5d1cd6656fda648edb2ed08fc8bf00a0e73a168b9a933,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421766097445758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0403fb1b23cfb5e056bd4c6b9a53213,},Annotations:map[string]string{io.kubernetes.container.hash: bf169b55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9172c9696585845b54c10ca4ea6b41f3394ec8486ed9c4ab5891fbc3b96f14,PodSandboxId:ec65cb6508dd6c03b5a0cf31ebc5f40a5a779e2f2e3712d34237db2954bb0078,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421766155208906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c06471f8c52838fcd95d5dddd117edd0,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13411edad0d27a257b4d7f2897fe094e6b62d4129301734dab0b431d4e92158c,PodSandboxId:f4d02e915f9b9a080d8dffb2083018ca24b82072e773d08c543cbc9ee015e9df,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421766060018615,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0410d613f7616fb04dfb9faef02de9,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4a7ee2a653ca6442bb0401ed8f8dce3d85bc1e3be5aa1511898dc686ce40c,PodSandboxId:bc1d551004a8508a94fa6174df8ec0ff10adfd94703187151ce00b388937946a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421765944687663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-423965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0701543be718af0aba53582d471ad4c6,},Annotations:map[string]string{io.kubernetes.container.hash: 3e2dc82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0db3469-eaa4-49db-b13e-d5e04fd72d48 name=/runtime.v1.RuntimeService/ListContainers
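
For reference, a minimal Go sketch (not captured from this test run) of the ListContainers call that CRI-O is answering in the debug lines above, assuming CRI-O's default socket at /var/run/crio/crio.sock and the k8s.io/cri-api v1 client; running `crictl ps -a` against the same socket yields roughly the table shown under "container status" below.

package main

// Hypothetical example: the socket path and 5s timeout are assumptions,
// not values taken from this report.
import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Connect to CRI-O's CRI socket (requires root on the node).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns every container, which is what the
	// "No filters were applied, returning full container list" lines show.
	resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
	}
}
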
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f1c7a33c877b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   7 seconds ago       Running             kube-controller-manager   2                   3aefcf9c7f68f       kube-controller-manager-kubernetes-upgrade-423965
	43c634194be56       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   7 seconds ago       Running             kube-scheduler            2                   a754b9bd54162       kube-scheduler-kubernetes-upgrade-423965
	44ca60d8760fe       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago       Running             etcd                      2                   e8f5d3ba2717d       etcd-kubernetes-upgrade-423965
	785cf860434eb       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   7 seconds ago       Running             kube-apiserver            2                   74e3530da45a3       kube-apiserver-kubernetes-upgrade-423965
	9f9172c969658       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   14 seconds ago      Exited              kube-scheduler            1                   ec65cb6508dd6       kube-scheduler-kubernetes-upgrade-423965
	4b6dbc53928e4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 seconds ago      Exited              etcd                      1                   8abf6c51477d1       etcd-kubernetes-upgrade-423965
	13411edad0d27       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   14 seconds ago      Exited              kube-controller-manager   1                   f4d02e915f9b9       kube-controller-manager-kubernetes-upgrade-423965
	29a4a7ee2a653       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   14 seconds ago      Exited              kube-apiserver            1                   bc1d551004a85       kube-apiserver-kubernetes-upgrade-423965
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-423965
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-423965
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:35:50 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-423965
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:36:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:36:16 +0000   Mon, 03 Jun 2024 13:35:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:36:16 +0000   Mon, 03 Jun 2024 13:35:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:36:16 +0000   Mon, 03 Jun 2024 13:35:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:36:16 +0000   Mon, 03 Jun 2024 13:35:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.64
	  Hostname:    kubernetes-upgrade-423965
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 035e0857dfe342869e6b0307b31d6877
	  System UUID:                035e0857-dfe3-4286-9e6b-0307b31d6877
	  Boot ID:                    be3db446-1be9-4205-82dd-5b8825a64941
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-423965                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25s
	  kube-system                 kube-apiserver-kubernetes-upgrade-423965             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-423965    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-kubernetes-upgrade-423965             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 34s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet  Node kubernetes-upgrade-423965 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet  Node kubernetes-upgrade-423965 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x7 over 34s)  kubelet  Node kubernetes-upgrade-423965 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  34s                kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.950940] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.063311] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068611] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.192157] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.146123] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.312860] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +4.933786] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +0.078978] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.902276] systemd-fstab-generator[861]: Ignoring "noauto" option for root device
	[  +9.593125] systemd-fstab-generator[1249]: Ignoring "noauto" option for root device
	[  +0.084593] kauditd_printk_skb: 97 callbacks suppressed
	[Jun 3 13:36] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.471183] systemd-fstab-generator[1795]: Ignoring "noauto" option for root device
	[  +0.137495] kauditd_printk_skb: 33 callbacks suppressed
	[  +0.094093] systemd-fstab-generator[1810]: Ignoring "noauto" option for root device
	[  +0.244081] systemd-fstab-generator[1824]: Ignoring "noauto" option for root device
	[  +0.190974] systemd-fstab-generator[1836]: Ignoring "noauto" option for root device
	[  +0.413557] systemd-fstab-generator[1865]: Ignoring "noauto" option for root device
	[  +1.382027] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +3.300999] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +0.085704] kauditd_printk_skb: 146 callbacks suppressed
	[  +5.718207] systemd-fstab-generator[2601]: Ignoring "noauto" option for root device
	[  +0.097683] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [44ca60d8760fe0ecf0010f39b82019fb9f93b9e89cd381dc60ce0e652ee6ae52] <==
	{"level":"info","ts":"2024-06-03T13:36:14.19632Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c6005d374c1772c0","local-member-id":"7e00f7fcc1a7adc9","added-peer-id":"7e00f7fcc1a7adc9","added-peer-peer-urls":["https://192.168.50.64:2380"]}
	{"level":"info","ts":"2024-06-03T13:36:14.188185Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:36:14.196433Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:36:14.196464Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:36:14.198389Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c6005d374c1772c0","local-member-id":"7e00f7fcc1a7adc9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:36:14.198465Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:36:14.205489Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T13:36:14.205712Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.64:2380"}
	{"level":"info","ts":"2024-06-03T13:36:14.208176Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.64:2380"}
	{"level":"info","ts":"2024-06-03T13:36:14.205747Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7e00f7fcc1a7adc9","initial-advertise-peer-urls":["https://192.168.50.64:2380"],"listen-peer-urls":["https://192.168.50.64:2380"],"advertise-client-urls":["https://192.168.50.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T13:36:14.205771Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T13:36:15.437502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T13:36:15.437617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T13:36:15.437669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 received MsgPreVoteResp from 7e00f7fcc1a7adc9 at term 2"}
	{"level":"info","ts":"2024-06-03T13:36:15.437699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T13:36:15.437723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 received MsgVoteResp from 7e00f7fcc1a7adc9 at term 3"}
	{"level":"info","ts":"2024-06-03T13:36:15.437763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became leader at term 3"}
	{"level":"info","ts":"2024-06-03T13:36:15.437789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7e00f7fcc1a7adc9 elected leader 7e00f7fcc1a7adc9 at term 3"}
	{"level":"info","ts":"2024-06-03T13:36:15.442902Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7e00f7fcc1a7adc9","local-member-attributes":"{Name:kubernetes-upgrade-423965 ClientURLs:[https://192.168.50.64:2379]}","request-path":"/0/members/7e00f7fcc1a7adc9/attributes","cluster-id":"c6005d374c1772c0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T13:36:15.442945Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:36:15.443282Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T13:36:15.443316Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T13:36:15.442964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:36:15.445183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T13:36:15.445233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.64:2379"}
	
	
	==> etcd [4b6dbc53928e4ba2daab10d4b6d0b6eb1fd4b64f6d2421046ee71594d758b997] <==
	{"level":"info","ts":"2024-06-03T13:36:06.930952Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"64.412162ms"}
	{"level":"info","ts":"2024-06-03T13:36:06.956066Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-03T13:36:07.06232Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"c6005d374c1772c0","local-member-id":"7e00f7fcc1a7adc9","commit-index":308}
	{"level":"info","ts":"2024-06-03T13:36:07.062487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-03T13:36:07.062571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became follower at term 2"}
	{"level":"info","ts":"2024-06-03T13:36:07.062621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 7e00f7fcc1a7adc9 [peers: [], term: 2, commit: 308, applied: 0, lastindex: 308, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-03T13:36:07.072595Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-06-03T13:36:07.172947Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":300}
	{"level":"info","ts":"2024-06-03T13:36:07.178466Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-06-03T13:36:07.192896Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"7e00f7fcc1a7adc9","timeout":"7s"}
	{"level":"info","ts":"2024-06-03T13:36:07.200458Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"7e00f7fcc1a7adc9"}
	{"level":"info","ts":"2024-06-03T13:36:07.200614Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"7e00f7fcc1a7adc9","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-06-03T13:36:07.200967Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-03T13:36:07.2035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 switched to configuration voters=(9079529513731730889)"}
	{"level":"info","ts":"2024-06-03T13:36:07.205288Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c6005d374c1772c0","local-member-id":"7e00f7fcc1a7adc9","added-peer-id":"7e00f7fcc1a7adc9","added-peer-peer-urls":["https://192.168.50.64:2380"]}
	{"level":"info","ts":"2024-06-03T13:36:07.205467Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c6005d374c1772c0","local-member-id":"7e00f7fcc1a7adc9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:36:07.205524Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:36:07.211523Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:36:07.211737Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:36:07.211869Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:36:07.24023Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T13:36:07.240517Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.64:2380"}
	{"level":"info","ts":"2024-06-03T13:36:07.24315Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.64:2380"}
	{"level":"info","ts":"2024-06-03T13:36:07.244617Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T13:36:07.244559Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7e00f7fcc1a7adc9","initial-advertise-peer-urls":["https://192.168.50.64:2380"],"listen-peer-urls":["https://192.168.50.64:2380"],"advertise-client-urls":["https://192.168.50.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> kernel <==
	 13:36:21 up 0 min,  0 users,  load average: 1.31, 0.37, 0.13
	Linux kubernetes-upgrade-423965 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [29a4a7ee2a653ca6442bb0401ed8f8dce3d85bc1e3be5aa1511898dc686ce40c] <==
	I0603 13:36:06.482935       1 options.go:221] external host was not specified, using 192.168.50.64
	I0603 13:36:06.484648       1 server.go:148] Version: v1.30.1
	I0603 13:36:06.484708       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [785cf860434eb043872577b202c26157ca5e312b5977754b22c78dae8238345c] <==
	I0603 13:36:16.830412       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0603 13:36:16.830444       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0603 13:36:16.830477       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0603 13:36:16.918883       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 13:36:16.925455       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 13:36:16.925489       1 policy_source.go:224] refreshing policies
	I0603 13:36:16.926656       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 13:36:16.928702       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0603 13:36:16.929261       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 13:36:16.929359       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 13:36:16.929396       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 13:36:16.929888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 13:36:16.936203       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 13:36:16.939003       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 13:36:16.939311       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 13:36:16.939444       1 aggregator.go:165] initial CRD sync complete...
	I0603 13:36:16.939475       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 13:36:16.939483       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 13:36:16.939492       1 cache.go:39] Caches are synced for autoregister controller
	I0603 13:36:17.835944       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 13:36:18.453002       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 13:36:18.471178       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 13:36:18.508810       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 13:36:18.636362       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 13:36:18.647978       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [13411edad0d27a257b4d7f2897fe094e6b62d4129301734dab0b431d4e92158c] <==
	I0603 13:36:07.699400       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [4f1c7a33c877b8710dd6201bd6b79480a8bcc1450153e1ca4c24e5a652f1beec] <==
	I0603 13:36:18.966504       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0603 13:36:18.966518       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0603 13:36:18.966788       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0603 13:36:18.975408       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0603 13:36:18.976540       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0603 13:36:18.976574       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0603 13:36:18.983235       1 shared_informer.go:320] Caches are synced for tokens
	I0603 13:36:18.992531       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0603 13:36:18.992696       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0603 13:36:19.002166       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0603 13:36:19.002464       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0603 13:36:19.002504       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0603 13:36:19.018535       1 controllermanager.go:761] "Started controller" controller="disruption-controller"
	I0603 13:36:19.018878       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0603 13:36:19.018933       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0603 13:36:19.018945       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0603 13:36:19.021040       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0603 13:36:19.021254       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0603 13:36:19.021282       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0603 13:36:19.042019       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0603 13:36:19.042290       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0603 13:36:19.042327       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0603 13:36:19.044971       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0603 13:36:19.045060       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0603 13:36:19.045177       1 shared_informer.go:313] Waiting for caches to sync for TTL
	
	
	==> kube-scheduler [43c634194be56fea4621fdd30aa44f902a8e3b3384b487115ac78b74dd8d27e7] <==
	I0603 13:36:14.898016       1 serving.go:380] Generated self-signed cert in-memory
	I0603 13:36:16.955445       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 13:36:16.955485       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:36:16.959305       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0603 13:36:16.959341       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0603 13:36:16.959384       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 13:36:16.959391       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 13:36:16.959403       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0603 13:36:16.959409       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0603 13:36:16.960205       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 13:36:16.960332       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 13:36:17.060272       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0603 13:36:17.060453       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0603 13:36:17.060699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9f9172c9696585845b54c10ca4ea6b41f3394ec8486ed9c4ab5891fbc3b96f14] <==
	I0603 13:36:08.229814       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: E0603 13:36:13.376099    2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-423965?timeout=10s\": dial tcp 192.168.50.64:8443: connect: connection refused" interval="400ms"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.473739    2324 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: E0603 13:36:13.474758    2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.64:8443: connect: connection refused" node="kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.475794    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b0410d613f7616fb04dfb9faef02de9-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-423965\" (UID: \"4b0410d613f7616fb04dfb9faef02de9\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.475939    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/e0403fb1b23cfb5e056bd4c6b9a53213-etcd-data\") pod \"etcd-kubernetes-upgrade-423965\" (UID: \"e0403fb1b23cfb5e056bd4c6b9a53213\") " pod="kube-system/etcd-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.475961    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0701543be718af0aba53582d471ad4c6-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-423965\" (UID: \"0701543be718af0aba53582d471ad4c6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.476188    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0701543be718af0aba53582d471ad4c6-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-423965\" (UID: \"0701543be718af0aba53582d471ad4c6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.476285    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b0410d613f7616fb04dfb9faef02de9-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-423965\" (UID: \"4b0410d613f7616fb04dfb9faef02de9\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.476303    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b0410d613f7616fb04dfb9faef02de9-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-423965\" (UID: \"4b0410d613f7616fb04dfb9faef02de9\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.476530    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0701543be718af0aba53582d471ad4c6-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-423965\" (UID: \"0701543be718af0aba53582d471ad4c6\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.476631    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b0410d613f7616fb04dfb9faef02de9-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-423965\" (UID: \"4b0410d613f7616fb04dfb9faef02de9\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.476746    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b0410d613f7616fb04dfb9faef02de9-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-423965\" (UID: \"4b0410d613f7616fb04dfb9faef02de9\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.476845    2324 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c06471f8c52838fcd95d5dddd117edd0-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-423965\" (UID: \"c06471f8c52838fcd95d5dddd117edd0\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.641992    2324 scope.go:117] "RemoveContainer" containerID="4b6dbc53928e4ba2daab10d4b6d0b6eb1fd4b64f6d2421046ee71594d758b997"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.642960    2324 scope.go:117] "RemoveContainer" containerID="29a4a7ee2a653ca6442bb0401ed8f8dce3d85bc1e3be5aa1511898dc686ce40c"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.649435    2324 scope.go:117] "RemoveContainer" containerID="13411edad0d27a257b4d7f2897fe094e6b62d4129301734dab0b431d4e92158c"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.650817    2324 scope.go:117] "RemoveContainer" containerID="9f9172c9696585845b54c10ca4ea6b41f3394ec8486ed9c4ab5891fbc3b96f14"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: E0603 13:36:13.777720    2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-423965?timeout=10s\": dial tcp 192.168.50.64:8443: connect: connection refused" interval="800ms"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:13.876021    2324 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-423965"
	Jun 03 13:36:13 kubernetes-upgrade-423965 kubelet[2324]: E0603 13:36:13.877085    2324 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.64:8443: connect: connection refused" node="kubernetes-upgrade-423965"
	Jun 03 13:36:14 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:14.678868    2324 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-423965"
	Jun 03 13:36:16 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:16.982624    2324 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-423965"
	Jun 03 13:36:16 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:16.982771    2324 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-423965"
	Jun 03 13:36:17 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:17.152066    2324 apiserver.go:52] "Watching apiserver"
	Jun 03 13:36:17 kubernetes-upgrade-423965 kubelet[2324]: I0603 13:36:17.175204    2324 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:36:20.154410 1127889 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19011-1078924/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
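The "failed to output last start logs ... bufio.Scanner: token too long" line above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt. A minimal sketch (not taken from minikube's logs.go; the file path and buffer size are illustrative only) of reading a log with very long lines by enlarging the scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path containing very long lines
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is 64 KiB; allow lines up to 10 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // each (possibly very long) line
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call above this would be "bufio.Scanner: token too long".
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}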
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-423965 -n kubernetes-upgrade-423965
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-423965 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-423965 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-423965 describe pod storage-provisioner: exit status 1 (72.140269ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-423965 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-423965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-423965
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-423965: (1.017399247s)
--- FAIL: TestKubernetesUpgrade (380.04s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (94.53s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-374510 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-374510 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.427246275s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-374510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-374510" primary control-plane node in "pause-374510" cluster
	* Updating the running kvm2 "pause-374510" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-374510" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 13:36:32.757201 1128402 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:36:32.757673 1128402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:36:32.757725 1128402 out.go:304] Setting ErrFile to fd 2...
	I0603 13:36:32.757744 1128402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:36:32.758164 1128402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:36:32.759489 1128402 out.go:298] Setting JSON to false
	I0603 13:36:32.761029 1128402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15540,"bootTime":1717406253,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:36:32.761129 1128402 start.go:139] virtualization: kvm guest
	I0603 13:36:32.763524 1128402 out.go:177] * [pause-374510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:36:32.765227 1128402 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:36:32.765250 1128402 notify.go:220] Checking for updates...
	I0603 13:36:32.766920 1128402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:36:32.768601 1128402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:36:32.770189 1128402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:36:32.771672 1128402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:36:32.773296 1128402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:36:32.775534 1128402 config.go:182] Loaded profile config "pause-374510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:36:32.776130 1128402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:36:32.776202 1128402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:36:32.793046 1128402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0603 13:36:32.793618 1128402 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:36:32.794377 1128402 main.go:141] libmachine: Using API Version  1
	I0603 13:36:32.794410 1128402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:36:32.794913 1128402 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:36:32.795232 1128402 main.go:141] libmachine: (pause-374510) Calling .DriverName
	I0603 13:36:32.795593 1128402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:36:32.796007 1128402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:36:32.796066 1128402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:36:32.813989 1128402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41149
	I0603 13:36:32.814559 1128402 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:36:32.815229 1128402 main.go:141] libmachine: Using API Version  1
	I0603 13:36:32.815266 1128402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:36:32.815619 1128402 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:36:32.815903 1128402 main.go:141] libmachine: (pause-374510) Calling .DriverName
	I0603 13:36:32.853303 1128402 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:36:32.854922 1128402 start.go:297] selected driver: kvm2
	I0603 13:36:32.854950 1128402 start.go:901] validating driver "kvm2" against &{Name:pause-374510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-374510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:36:32.855070 1128402 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:36:32.855393 1128402 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:36:32.855467 1128402 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:36:32.872838 1128402 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:36:32.874016 1128402 cni.go:84] Creating CNI manager for ""
	I0603 13:36:32.874042 1128402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:36:32.874126 1128402 start.go:340] cluster config:
	{Name:pause-374510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-374510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:36:32.874366 1128402 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:36:32.876530 1128402 out.go:177] * Starting "pause-374510" primary control-plane node in "pause-374510" cluster
	I0603 13:36:32.878057 1128402 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:36:32.878104 1128402 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 13:36:32.878115 1128402 cache.go:56] Caching tarball of preloaded images
	I0603 13:36:32.878220 1128402 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:36:32.878233 1128402 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 13:36:32.878414 1128402 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/pause-374510/config.json ...
	I0603 13:36:32.878701 1128402 start.go:360] acquireMachinesLock for pause-374510: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:37:16.010896 1128402 start.go:364] duration metric: took 43.132161166s to acquireMachinesLock for "pause-374510"
	I0603 13:37:16.010956 1128402 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:37:16.010973 1128402 fix.go:54] fixHost starting: 
	I0603 13:37:16.011417 1128402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:37:16.011476 1128402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:37:16.029021 1128402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46667
	I0603 13:37:16.029634 1128402 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:37:16.030167 1128402 main.go:141] libmachine: Using API Version  1
	I0603 13:37:16.030212 1128402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:37:16.030614 1128402 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:37:16.030834 1128402 main.go:141] libmachine: (pause-374510) Calling .DriverName
	I0603 13:37:16.031033 1128402 main.go:141] libmachine: (pause-374510) Calling .GetState
	I0603 13:37:16.032599 1128402 fix.go:112] recreateIfNeeded on pause-374510: state=Running err=<nil>
	W0603 13:37:16.032621 1128402 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:37:16.035042 1128402 out.go:177] * Updating the running kvm2 "pause-374510" VM ...
	I0603 13:37:16.036304 1128402 machine.go:94] provisionDockerMachine start ...
	I0603 13:37:16.036331 1128402 main.go:141] libmachine: (pause-374510) Calling .DriverName
	I0603 13:37:16.036588 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:16.039295 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.039782 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:16.039812 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.039946 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHPort
	I0603 13:37:16.040128 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:16.040305 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:16.040465 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHUsername
	I0603 13:37:16.040638 1128402 main.go:141] libmachine: Using SSH client type: native
	I0603 13:37:16.040911 1128402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.94.3 22 <nil> <nil>}
	I0603 13:37:16.040928 1128402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:37:16.159218 1128402 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-374510
	
	I0603 13:37:16.159267 1128402 main.go:141] libmachine: (pause-374510) Calling .GetMachineName
	I0603 13:37:16.159585 1128402 buildroot.go:166] provisioning hostname "pause-374510"
	I0603 13:37:16.159615 1128402 main.go:141] libmachine: (pause-374510) Calling .GetMachineName
	I0603 13:37:16.159790 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:16.163297 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.163829 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:16.163868 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.164112 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHPort
	I0603 13:37:16.164355 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:16.164565 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:16.164769 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHUsername
	I0603 13:37:16.164985 1128402 main.go:141] libmachine: Using SSH client type: native
	I0603 13:37:16.165171 1128402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.94.3 22 <nil> <nil>}
	I0603 13:37:16.165189 1128402 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-374510 && echo "pause-374510" | sudo tee /etc/hostname
	I0603 13:37:16.309068 1128402 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-374510
	
	I0603 13:37:16.309106 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:16.312262 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.312737 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:16.312773 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.312970 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHPort
	I0603 13:37:16.313183 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:16.313376 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:16.313589 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHUsername
	I0603 13:37:16.313812 1128402 main.go:141] libmachine: Using SSH client type: native
	I0603 13:37:16.314017 1128402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.94.3 22 <nil> <nil>}
	I0603 13:37:16.314041 1128402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-374510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-374510/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-374510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:37:16.435256 1128402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:37:16.435291 1128402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:37:16.435340 1128402 buildroot.go:174] setting up certificates
	I0603 13:37:16.435375 1128402 provision.go:84] configureAuth start
	I0603 13:37:16.435407 1128402 main.go:141] libmachine: (pause-374510) Calling .GetMachineName
	I0603 13:37:16.435748 1128402 main.go:141] libmachine: (pause-374510) Calling .GetIP
	I0603 13:37:16.438715 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.439142 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:16.439193 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.439365 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:16.441984 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.442384 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:16.442426 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.442601 1128402 provision.go:143] copyHostCerts
	I0603 13:37:16.442686 1128402 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:37:16.442704 1128402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:37:16.442780 1128402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:37:16.442915 1128402 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:37:16.442928 1128402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:37:16.442956 1128402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:37:16.443069 1128402 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:37:16.443081 1128402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:37:16.443116 1128402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:37:16.443226 1128402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.pause-374510 san=[127.0.0.1 192.168.94.3 localhost minikube pause-374510]
	I0603 13:37:16.619812 1128402 provision.go:177] copyRemoteCerts
	I0603 13:37:16.619886 1128402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:37:16.619925 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:16.623026 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.623513 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:16.623546 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.623889 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHPort
	I0603 13:37:16.624103 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:16.624278 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHUsername
	I0603 13:37:16.624433 1128402 sshutil.go:53] new ssh client: &{IP:192.168.94.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/pause-374510/id_rsa Username:docker}
	I0603 13:37:16.715298 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:37:16.747677 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0603 13:37:16.779033 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:37:16.812748 1128402 provision.go:87] duration metric: took 377.352961ms to configureAuth
	I0603 13:37:16.812787 1128402 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:37:16.813081 1128402 config.go:182] Loaded profile config "pause-374510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:37:16.813177 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:16.816386 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.816845 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:16.816876 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:16.817216 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHPort
	I0603 13:37:16.817512 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:16.817787 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:16.817979 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHUsername
	I0603 13:37:16.818180 1128402 main.go:141] libmachine: Using SSH client type: native
	I0603 13:37:16.818387 1128402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.94.3 22 <nil> <nil>}
	I0603 13:37:16.818409 1128402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:37:24.542785 1128402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:37:24.542814 1128402 machine.go:97] duration metric: took 8.506492634s to provisionDockerMachine
	I0603 13:37:24.542829 1128402 start.go:293] postStartSetup for "pause-374510" (driver="kvm2")
	I0603 13:37:24.542842 1128402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:37:24.542863 1128402 main.go:141] libmachine: (pause-374510) Calling .DriverName
	I0603 13:37:24.545751 1128402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:37:24.545789 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:24.549068 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.549790 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHPort
	I0603 13:37:24.550029 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:24.550097 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:24.550134 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.550201 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHUsername
	I0603 13:37:24.550381 1128402 sshutil.go:53] new ssh client: &{IP:192.168.94.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/pause-374510/id_rsa Username:docker}
	I0603 13:37:24.652541 1128402 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:37:24.657738 1128402 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:37:24.657766 1128402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:37:24.657837 1128402 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:37:24.657968 1128402 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:37:24.658101 1128402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:37:24.669803 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:37:24.701830 1128402 start.go:296] duration metric: took 158.981375ms for postStartSetup
	I0603 13:37:24.701885 1128402 fix.go:56] duration metric: took 8.690918607s for fixHost
	I0603 13:37:24.701914 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:24.705682 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.706167 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:24.706203 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.706445 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHPort
	I0603 13:37:24.706669 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:24.706851 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:24.707061 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHUsername
	I0603 13:37:24.707288 1128402 main.go:141] libmachine: Using SSH client type: native
	I0603 13:37:24.707502 1128402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.94.3 22 <nil> <nil>}
	I0603 13:37:24.707517 1128402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:37:24.836293 1128402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717421844.824409788
	
	I0603 13:37:24.836324 1128402 fix.go:216] guest clock: 1717421844.824409788
	I0603 13:37:24.836335 1128402 fix.go:229] Guest: 2024-06-03 13:37:24.824409788 +0000 UTC Remote: 2024-06-03 13:37:24.701890003 +0000 UTC m=+51.985907320 (delta=122.519785ms)
	I0603 13:37:24.836365 1128402 fix.go:200] guest clock delta is within tolerance: 122.519785ms
	I0603 13:37:24.836371 1128402 start.go:83] releasing machines lock for "pause-374510", held for 8.825449557s
	I0603 13:37:24.836406 1128402 main.go:141] libmachine: (pause-374510) Calling .DriverName
	I0603 13:37:24.836733 1128402 main.go:141] libmachine: (pause-374510) Calling .GetIP
	I0603 13:37:24.840318 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.840804 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:24.840867 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.841184 1128402 main.go:141] libmachine: (pause-374510) Calling .DriverName
	I0603 13:37:24.841886 1128402 main.go:141] libmachine: (pause-374510) Calling .DriverName
	I0603 13:37:24.842150 1128402 main.go:141] libmachine: (pause-374510) Calling .DriverName
	I0603 13:37:24.842311 1128402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:37:24.842454 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:24.842458 1128402 ssh_runner.go:195] Run: cat /version.json
	I0603 13:37:24.842805 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHHostname
	I0603 13:37:24.846560 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.846728 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.847274 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:24.847303 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.847666 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHPort
	I0603 13:37:24.847874 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:24.847985 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:24.848009 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:24.848048 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHUsername
	I0603 13:37:24.848085 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHPort
	I0603 13:37:24.848364 1128402 sshutil.go:53] new ssh client: &{IP:192.168.94.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/pause-374510/id_rsa Username:docker}
	I0603 13:37:24.848379 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHKeyPath
	I0603 13:37:24.848538 1128402 main.go:141] libmachine: (pause-374510) Calling .GetSSHUsername
	I0603 13:37:24.848639 1128402 sshutil.go:53] new ssh client: &{IP:192.168.94.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/pause-374510/id_rsa Username:docker}
	I0603 13:37:24.974621 1128402 ssh_runner.go:195] Run: systemctl --version
	I0603 13:37:24.982309 1128402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:37:25.165735 1128402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:37:25.178292 1128402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:37:25.178394 1128402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:37:25.190879 1128402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0603 13:37:25.190965 1128402 start.go:494] detecting cgroup driver to use...
	I0603 13:37:25.191046 1128402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:37:25.218471 1128402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:37:25.235541 1128402 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:37:25.235628 1128402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:37:25.252977 1128402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:37:25.271191 1128402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:37:25.475921 1128402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:37:25.724794 1128402 docker.go:233] disabling docker service ...
	I0603 13:37:25.724884 1128402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:37:25.777583 1128402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:37:25.999855 1128402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:37:26.276896 1128402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:37:26.544142 1128402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:37:26.570156 1128402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:37:26.724993 1128402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:37:26.725076 1128402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:37:26.773134 1128402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:37:26.773240 1128402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:37:26.808344 1128402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:37:26.828401 1128402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:37:26.849633 1128402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:37:26.866726 1128402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:37:26.881817 1128402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:37:26.900630 1128402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:37:26.916912 1128402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:37:26.929759 1128402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:37:26.941572 1128402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:37:27.173841 1128402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:37:27.901471 1128402 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:37:27.901567 1128402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:37:27.909937 1128402 start.go:562] Will wait 60s for crictl version
	I0603 13:37:27.910020 1128402 ssh_runner.go:195] Run: which crictl
	I0603 13:37:27.914479 1128402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:37:27.959283 1128402 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:37:27.959378 1128402 ssh_runner.go:195] Run: crio --version
	I0603 13:37:27.989375 1128402 ssh_runner.go:195] Run: crio --version
	I0603 13:37:28.020440 1128402 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:37:28.021818 1128402 main.go:141] libmachine: (pause-374510) Calling .GetIP
	I0603 13:37:28.025495 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:28.026075 1128402 main.go:141] libmachine: (pause-374510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:27:7d", ip: ""} in network mk-pause-374510: {Iface:virbr5 ExpiryTime:2024-06-03 14:35:09 +0000 UTC Type:0 Mac:52:54:00:d1:27:7d Iaid: IPaddr:192.168.94.3 Prefix:24 Hostname:pause-374510 Clientid:01:52:54:00:d1:27:7d}
	I0603 13:37:28.026108 1128402 main.go:141] libmachine: (pause-374510) DBG | domain pause-374510 has defined IP address 192.168.94.3 and MAC address 52:54:00:d1:27:7d in network mk-pause-374510
	I0603 13:37:28.026580 1128402 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0603 13:37:28.031738 1128402 kubeadm.go:877] updating cluster {Name:pause-374510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-374510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:37:28.031981 1128402 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:37:28.032061 1128402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:37:28.085042 1128402 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:37:28.085070 1128402 crio.go:433] Images already preloaded, skipping extraction
	I0603 13:37:28.085141 1128402 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:37:28.137031 1128402 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:37:28.137063 1128402 cache_images.go:84] Images are preloaded, skipping loading
	I0603 13:37:28.137074 1128402 kubeadm.go:928] updating node { 192.168.94.3 8443 v1.30.1 crio true true} ...
	I0603 13:37:28.137248 1128402 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-374510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:pause-374510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:37:28.137364 1128402 ssh_runner.go:195] Run: crio config
	I0603 13:37:28.195752 1128402 cni.go:84] Creating CNI manager for ""
	I0603 13:37:28.195780 1128402 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:37:28.195793 1128402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:37:28.195817 1128402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.3 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-374510 NodeName:pause-374510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:37:28.195974 1128402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-374510"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
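	# Aside: the kubeadm, kubelet and kube-proxy settings above are rendered into a single YAML document and
	# copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a rough sketch, assuming shell access
	# to the node, the rendered file can be inspected and (on newer kubeadm releases) schema-checked in place:
	#   sudo cat /var/tmp/minikube/kubeadm.yaml.new
	#   sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new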
	
	I0603 13:37:28.196065 1128402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:37:28.210863 1128402 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:37:28.210966 1128402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:37:28.221519 1128402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0603 13:37:28.246464 1128402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:37:28.270679 1128402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0603 13:37:28.294885 1128402 ssh_runner.go:195] Run: grep 192.168.94.3	control-plane.minikube.internal$ /etc/hosts
	I0603 13:37:28.299689 1128402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:37:28.478357 1128402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:37:28.494176 1128402 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/pause-374510 for IP: 192.168.94.3
	I0603 13:37:28.494206 1128402 certs.go:194] generating shared ca certs ...
	I0603 13:37:28.494230 1128402 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:37:28.494442 1128402 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:37:28.494507 1128402 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:37:28.494522 1128402 certs.go:256] generating profile certs ...
	I0603 13:37:28.494637 1128402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/pause-374510/client.key
	I0603 13:37:28.494716 1128402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/pause-374510/apiserver.key.cea453b6
	I0603 13:37:28.494772 1128402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/pause-374510/proxy-client.key
	I0603 13:37:28.494917 1128402 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:37:28.494968 1128402 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:37:28.494984 1128402 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:37:28.495020 1128402 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:37:28.495055 1128402 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:37:28.495104 1128402 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:37:28.495165 1128402 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:37:28.496123 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:37:28.527405 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:37:28.558046 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:37:28.593465 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:37:28.624569 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/pause-374510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0603 13:37:28.652152 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/pause-374510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:37:28.684033 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/pause-374510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:37:28.789543 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/pause-374510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:37:28.961602 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:37:29.121563 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:37:29.203651 1128402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:37:29.298457 1128402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:37:29.340187 1128402 ssh_runner.go:195] Run: openssl version
	I0603 13:37:29.351083 1128402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:37:29.383592 1128402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:37:29.390644 1128402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:37:29.390723 1128402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:37:29.407574 1128402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:37:29.429989 1128402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:37:29.460259 1128402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:37:29.471863 1128402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:37:29.471949 1128402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:37:29.483900 1128402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:37:29.500451 1128402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:37:29.521715 1128402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:37:29.528504 1128402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:37:29.528575 1128402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:37:29.540459 1128402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
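	# Aside: each certificate above goes through the standard OpenSSL CA-path steps: link the PEM into
	# /etc/ssl/certs, compute its subject hash, then create an /etc/ssl/certs/<hash>.0 symlink so tools that
	# use the default verify path can find it. A condensed sketch for an arbitrary cert (paths illustrative):
	#   pem=/usr/share/ca-certificates/example.pem
	#   hash=$(openssl x509 -hash -noout -in "$pem")
	#   sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"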
	I0603 13:37:29.553572 1128402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:37:29.562555 1128402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:37:29.571399 1128402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:37:29.586077 1128402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:37:29.608267 1128402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:37:29.617763 1128402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:37:29.628875 1128402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
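	# Aside: "-checkend 86400" asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours)
	# from now; a non-zero exit status would force certificate regeneration. Standalone example:
	#   openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	#     && echo "valid for at least 24h" || echo "expires within 24h"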
	I0603 13:37:29.639420 1128402 kubeadm.go:391] StartCluster: {Name:pause-374510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:pause-374510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:37:29.639591 1128402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:37:29.639680 1128402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:37:29.690915 1128402 cri.go:89] found id: "4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82"
	I0603 13:37:29.690949 1128402 cri.go:89] found id: "2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473"
	I0603 13:37:29.690956 1128402 cri.go:89] found id: "a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2"
	I0603 13:37:29.690960 1128402 cri.go:89] found id: "a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592"
	I0603 13:37:29.690964 1128402 cri.go:89] found id: "07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d"
	I0603 13:37:29.690968 1128402 cri.go:89] found id: "97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d"
	I0603 13:37:29.690972 1128402 cri.go:89] found id: "646d8679c111d7c550b08b53d6f63422698705fd1c7aa412c642b2d338b19723"
	I0603 13:37:29.690975 1128402 cri.go:89] found id: ""
	I0603 13:37:29.691034 1128402 ssh_runner.go:195] Run: sudo runc list -f json
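	# Aside: the container IDs listed above come from the earlier "crictl ps -a --quiet --label
	# io.kubernetes.pod.namespace=kube-system" call; dropping --quiet yields the human-readable table with
	# container names and states:
	#   sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system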

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-374510 -n pause-374510
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-374510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-374510 logs -n 25: (1.625502196s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-021279 sudo journalctl                       | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | -xeu kubelet --all --full                            |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | status docker --all --full                           |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| delete  | -p cert-expiration-925487                            | cert-expiration-925487 | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cat docker --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo docker                           | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| start   | -p custom-flannel-021279                             | custom-flannel-021279  | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                        |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                        |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | status cri-docker --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cat cri-docker --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo                                  | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | status containerd --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cat containerd --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo containerd                       | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | config dump                                          |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | status crio --all --full                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cat crio --no-pager                                  |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo find                             | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo crio                             | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p auto-021279                                       | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	| start   | -p kindnet-021279                                    | kindnet-021279         | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | --memory=3072                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                        |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:37:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:37:56.561662 1130573 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:37:56.561967 1130573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:37:56.561980 1130573 out.go:304] Setting ErrFile to fd 2...
	I0603 13:37:56.561987 1130573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:37:56.562220 1130573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:37:56.562876 1130573 out.go:298] Setting JSON to false
	I0603 13:37:56.564092 1130573 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15624,"bootTime":1717406253,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:37:56.564173 1130573 start.go:139] virtualization: kvm guest
	I0603 13:37:56.565901 1130573 out.go:177] * [kindnet-021279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:37:56.567965 1130573 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:37:56.567965 1130573 notify.go:220] Checking for updates...
	I0603 13:37:56.569673 1130573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:37:56.571562 1130573 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:37:56.573256 1130573 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:37:56.574774 1130573 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:37:56.576220 1130573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:37:56.579146 1130573 config.go:182] Loaded profile config "calico-021279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:37:56.579283 1130573 config.go:182] Loaded profile config "custom-flannel-021279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:37:56.579444 1130573 config.go:182] Loaded profile config "pause-374510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:37:56.579613 1130573 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:37:56.623810 1130573 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 13:37:56.625323 1130573 start.go:297] selected driver: kvm2
	I0603 13:37:56.625369 1130573 start.go:901] validating driver "kvm2" against <nil>
	I0603 13:37:56.625388 1130573 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:37:56.626482 1130573 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:37:56.626623 1130573 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:37:56.644932 1130573 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:37:56.645004 1130573 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 13:37:56.645331 1130573 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:37:56.645375 1130573 cni.go:84] Creating CNI manager for "kindnet"
	I0603 13:37:56.645387 1130573 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 13:37:56.645476 1130573 start.go:340] cluster config:
	{Name:kindnet-021279 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-021279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:37:56.645622 1130573 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:37:56.647720 1130573 out.go:177] * Starting "kindnet-021279" primary control-plane node in "kindnet-021279" cluster
	I0603 13:37:53.641702 1128402 pod_ready.go:92] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:53.641734 1128402 pod_ready.go:81] duration metric: took 5.508773941s for pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:53.641747 1128402 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:55.650752 1128402 pod_ready.go:92] pod "etcd-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:55.650789 1128402 pod_ready.go:81] duration metric: took 2.009032722s for pod "etcd-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:55.650804 1128402 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.159124 1128402 pod_ready.go:92] pod "kube-apiserver-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:56.159156 1128402 pod_ready.go:81] duration metric: took 508.342427ms for pod "kube-apiserver-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.159195 1128402 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.166278 1128402 pod_ready.go:92] pod "kube-controller-manager-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:56.166310 1128402 pod_ready.go:81] duration metric: took 7.106132ms for pod "kube-controller-manager-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.166325 1128402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6tc5r" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.172136 1128402 pod_ready.go:92] pod "kube-proxy-6tc5r" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:56.172165 1128402 pod_ready.go:81] duration metric: took 5.831785ms for pod "kube-proxy-6tc5r" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.172178 1128402 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:58.179321 1128402 pod_ready.go:102] pod "kube-scheduler-pause-374510" in "kube-system" namespace has status "Ready":"False"
	I0603 13:37:59.179524 1128402 pod_ready.go:92] pod "kube-scheduler-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:59.179552 1128402 pod_ready.go:81] duration metric: took 3.007365098s for pod "kube-scheduler-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.179560 1128402 pod_ready.go:38] duration metric: took 11.055483912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
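	# Aside: the per-pod Ready polling above is roughly what "kubectl wait" does; for example, against this
	# profile (context name taken from this run, timeout value illustrative):
	#   kubectl --context pause-374510 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s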
	I0603 13:37:59.179579 1128402 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:37:59.195447 1128402 ops.go:34] apiserver oom_adj: -16
	I0603 13:37:59.195481 1128402 kubeadm.go:591] duration metric: took 29.391734123s to restartPrimaryControlPlane
	I0603 13:37:59.195502 1128402 kubeadm.go:393] duration metric: took 29.556098168s to StartCluster
	I0603 13:37:59.195521 1128402 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:37:59.195609 1128402 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:37:59.196881 1128402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:37:59.197184 1128402 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.94.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:37:59.199057 1128402 out.go:177] * Verifying Kubernetes components...
	I0603 13:37:59.197289 1128402 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:37:59.197508 1128402 config.go:182] Loaded profile config "pause-374510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:37:59.200865 1128402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:37:59.200873 1128402 out.go:177] * Enabled addons: 
	I0603 13:37:57.287525 1128335 node_ready.go:53] node "calico-021279" has status "Ready":"False"
	I0603 13:37:58.286008 1128335 node_ready.go:49] node "calico-021279" has status "Ready":"True"
	I0603 13:37:58.286039 1128335 node_ready.go:38] duration metric: took 8.003135211s for node "calico-021279" to be "Ready" ...
	I0603 13:37:58.286051 1128335 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:37:58.301560 1128335 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-564985c589-znvpg" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.308720 1128335 pod_ready.go:102] pod "calico-kube-controllers-564985c589-znvpg" in "kube-system" namespace has status "Ready":"False"
	I0603 13:37:56.299502 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:37:56.327982 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:37:56.328007 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:37:56.300218 1129998 retry.go:31] will retry after 634.620815ms: waiting for machine to come up
	I0603 13:37:56.936416 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:37:56.937110 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:37:56.937142 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:37:56.937047 1129998 retry.go:31] will retry after 966.248782ms: waiting for machine to come up
	I0603 13:37:57.904883 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:37:57.905954 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:37:57.905982 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:37:57.905883 1129998 retry.go:31] will retry after 1.019724207s: waiting for machine to come up
	I0603 13:37:58.927151 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:37:58.927706 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:37:58.927737 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:37:58.927650 1129998 retry.go:31] will retry after 1.440630461s: waiting for machine to come up
	I0603 13:38:00.369529 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:38:00.370177 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:38:00.370206 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:38:00.370095 1129998 retry.go:31] will retry after 1.420803394s: waiting for machine to come up
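	# Aside: these retries poll libvirt for the new domain's IP address (its DHCP lease on the
	# mk-custom-flannel-021279 network). Assuming libvirt client tools are available on the host, the lease can
	# also be read directly (domain name taken from this run):
	#   sudo virsh domifaddr custom-flannel-021279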
	I0603 13:37:56.649075 1130573 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:37:56.649134 1130573 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 13:37:56.649146 1130573 cache.go:56] Caching tarball of preloaded images
	I0603 13:37:56.649307 1130573 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:37:56.649326 1130573 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 13:37:56.649515 1130573 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/config.json ...
	I0603 13:37:56.649542 1130573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/config.json: {Name:mka7e6568c5ab33747f807bb6c3d3f010f4b2853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:37:56.649796 1130573 start.go:360] acquireMachinesLock for kindnet-021279: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:37:59.202251 1128402 addons.go:510] duration metric: took 4.96381ms for enable addons: enabled=[]
	I0603 13:37:59.430699 1128402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:37:59.447061 1128402 node_ready.go:35] waiting up to 6m0s for node "pause-374510" to be "Ready" ...
	I0603 13:37:59.450786 1128402 node_ready.go:49] node "pause-374510" has status "Ready":"True"
	I0603 13:37:59.450812 1128402 node_ready.go:38] duration metric: took 3.683419ms for node "pause-374510" to be "Ready" ...
	I0603 13:37:59.450821 1128402 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:37:59.455872 1128402 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.461935 1128402 pod_ready.go:92] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:59.461955 1128402 pod_ready.go:81] duration metric: took 6.05415ms for pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.461964 1128402 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.645780 1128402 pod_ready.go:92] pod "etcd-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:59.645808 1128402 pod_ready.go:81] duration metric: took 183.838709ms for pod "etcd-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.645819 1128402 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.046057 1128402 pod_ready.go:92] pod "kube-apiserver-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:38:00.046091 1128402 pod_ready.go:81] duration metric: took 400.263199ms for pod "kube-apiserver-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.046106 1128402 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.449364 1128402 pod_ready.go:92] pod "kube-controller-manager-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:38:00.449401 1128402 pod_ready.go:81] duration metric: took 403.285311ms for pod "kube-controller-manager-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.449443 1128402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6tc5r" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.846767 1128402 pod_ready.go:92] pod "kube-proxy-6tc5r" in "kube-system" namespace has status "Ready":"True"
	I0603 13:38:00.846801 1128402 pod_ready.go:81] duration metric: took 397.348896ms for pod "kube-proxy-6tc5r" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.846814 1128402 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:01.247647 1128402 pod_ready.go:92] pod "kube-scheduler-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:38:01.247682 1128402 pod_ready.go:81] duration metric: took 400.859083ms for pod "kube-scheduler-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:01.247694 1128402 pod_ready.go:38] duration metric: took 1.796862399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:38:01.247713 1128402 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:38:01.247801 1128402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:38:01.270397 1128402 api_server.go:72] duration metric: took 2.073152123s to wait for apiserver process to appear ...
	I0603 13:38:01.270464 1128402 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:38:01.270506 1128402 api_server.go:253] Checking apiserver healthz at https://192.168.94.3:8443/healthz ...
	I0603 13:38:01.276496 1128402 api_server.go:279] https://192.168.94.3:8443/healthz returned 200:
	ok
	I0603 13:38:01.277754 1128402 api_server.go:141] control plane version: v1.30.1
	I0603 13:38:01.277828 1128402 api_server.go:131] duration metric: took 7.355365ms to wait for apiserver health ...
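	# Aside: the healthz probe above talks to the API server over HTTPS directly; the same endpoints can be
	# queried through kubectl, which reuses the client credentials from the kubeconfig (context name from this run):
	#   kubectl --context pause-374510 get --raw /healthz
	#   kubectl --context pause-374510 get --raw '/readyz?verbose'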
	I0603 13:38:01.277842 1128402 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:38:01.449829 1128402 system_pods.go:59] 6 kube-system pods found
	I0603 13:38:01.449874 1128402 system_pods.go:61] "coredns-7db6d8ff4d-k4gdp" [ea63b48b-59b9-4fc1-ab16-4bd2452781d8] Running
	I0603 13:38:01.449884 1128402 system_pods.go:61] "etcd-pause-374510" [6802cbfa-1b6a-4f49-93d4-8fd472f4f1ba] Running
	I0603 13:38:01.449889 1128402 system_pods.go:61] "kube-apiserver-pause-374510" [b54aa0cb-5d8f-4499-9db7-b8b9f8435cf9] Running
	I0603 13:38:01.449896 1128402 system_pods.go:61] "kube-controller-manager-pause-374510" [4b6dcc0e-479d-4f24-876b-6933b62af65e] Running
	I0603 13:38:01.449901 1128402 system_pods.go:61] "kube-proxy-6tc5r" [13008dfa-c9ca-4978-bb85-797ab01a9495] Running
	I0603 13:38:01.449906 1128402 system_pods.go:61] "kube-scheduler-pause-374510" [8c97b04d-8936-438d-bf6b-4c192d34e4d4] Running
	I0603 13:38:01.449915 1128402 system_pods.go:74] duration metric: took 172.065387ms to wait for pod list to return data ...
	I0603 13:38:01.449926 1128402 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:38:01.647422 1128402 default_sa.go:45] found service account: "default"
	I0603 13:38:01.647474 1128402 default_sa.go:55] duration metric: took 197.537996ms for default service account to be created ...
	I0603 13:38:01.647489 1128402 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:38:01.849376 1128402 system_pods.go:86] 6 kube-system pods found
	I0603 13:38:01.849423 1128402 system_pods.go:89] "coredns-7db6d8ff4d-k4gdp" [ea63b48b-59b9-4fc1-ab16-4bd2452781d8] Running
	I0603 13:38:01.849432 1128402 system_pods.go:89] "etcd-pause-374510" [6802cbfa-1b6a-4f49-93d4-8fd472f4f1ba] Running
	I0603 13:38:01.849439 1128402 system_pods.go:89] "kube-apiserver-pause-374510" [b54aa0cb-5d8f-4499-9db7-b8b9f8435cf9] Running
	I0603 13:38:01.849446 1128402 system_pods.go:89] "kube-controller-manager-pause-374510" [4b6dcc0e-479d-4f24-876b-6933b62af65e] Running
	I0603 13:38:01.849452 1128402 system_pods.go:89] "kube-proxy-6tc5r" [13008dfa-c9ca-4978-bb85-797ab01a9495] Running
	I0603 13:38:01.849459 1128402 system_pods.go:89] "kube-scheduler-pause-374510" [8c97b04d-8936-438d-bf6b-4c192d34e4d4] Running
	I0603 13:38:01.849468 1128402 system_pods.go:126] duration metric: took 201.971373ms to wait for k8s-apps to be running ...
	I0603 13:38:01.849477 1128402 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:38:01.849664 1128402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:38:01.870409 1128402 system_svc.go:56] duration metric: took 20.917384ms WaitForService to wait for kubelet
	I0603 13:38:01.870454 1128402 kubeadm.go:576] duration metric: took 2.673226559s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:38:01.870482 1128402 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:38:02.047147 1128402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:38:02.047179 1128402 node_conditions.go:123] node cpu capacity is 2
	I0603 13:38:02.047192 1128402 node_conditions.go:105] duration metric: took 176.703823ms to run NodePressure ...
	I0603 13:38:02.047207 1128402 start.go:240] waiting for startup goroutines ...
	I0603 13:38:02.047215 1128402 start.go:245] waiting for cluster config update ...
	I0603 13:38:02.047227 1128402 start.go:254] writing updated cluster config ...
	I0603 13:38:02.047574 1128402 ssh_runner.go:195] Run: rm -f paused
	I0603 13:38:02.117595 1128402 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:38:02.119949 1128402 out.go:177] * Done! kubectl is now configured to use "pause-374510" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.901972011Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-k4gdp,Uid:ea63b48b-59b9-4fc1-ab16-4bd2452781d8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421849024327768,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:35:50.763148968Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&PodSandboxMetadata{Name:kube-proxy-6tc5r,Uid:13008dfa-c9ca-4978-bb85-797ab01a9495,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1717421848953662809,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:35:50.639634654Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&PodSandboxMetadata{Name:etcd-pause-374510,Uid:dfdc9599d2feaa906e61fb7a6e4cf2b1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421848909611263,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.94.3:2379,kubernetes.io/config.hash: dfdc9599d2feaa906e61fb7a6e4cf2b1,kubernetes.io/config.seen: 2024-06-03T13:35:36.984917848Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-374510,Uid:377763bf209825dc0f3733b4fb073b5b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421848832105505,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 377763bf209825dc0f3733b4fb073b5b,kubernetes.io/config.seen: 2024-06-03T13:35:36.984922774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3973517155d58d495053d072e8bfa888e8d
d8ae229a0316c18b7d8e00fd7b13f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-374510,Uid:50a237ba4f3cdf7f5165fcfcc243c780,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421848776941977,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 50a237ba4f3cdf7f5165fcfcc243c780,kubernetes.io/config.seen: 2024-06-03T13:35:36.984923553Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-374510,Uid:ddffbe442e034723d60bf0b98bb412a8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421848762421250,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.94.3:8443,kubernetes.io/config.hash: ddffbe442e034723d60bf0b98bb412a8,kubernetes.io/config.seen: 2024-06-03T13:35:36.984921575Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4dc1fb3aa6d92c1a0f0f8f13424851a5b4e0019139f39be4d53440dd223b7153,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-374510,Uid:377763bf209825dc0f3733b4fb073b5b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1717421845669989260,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,tier: control-plane,},Annotations:map[string]s
tring{kubernetes.io/config.hash: 377763bf209825dc0f3733b4fb073b5b,kubernetes.io/config.seen: 2024-06-03T13:35:36.984922774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aa8e932c5cdcb9aa97979ca2705f20053db1c445916f2cb7c80e8039c0beae75,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-374510,Uid:50a237ba4f3cdf7f5165fcfcc243c780,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1717421845662410327,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 50a237ba4f3cdf7f5165fcfcc243c780,kubernetes.io/config.seen: 2024-06-03T13:35:36.984923553Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7f5e32f9e93875b5fab213d27ba8761561aaac4b57db73ec9b4b0ea4bea385c5,Metadata:&PodSandboxMetadata{Name:kube-api
server-pause-374510,Uid:ddffbe442e034723d60bf0b98bb412a8,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1717421845653438251,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.94.3:8443,kubernetes.io/config.hash: ddffbe442e034723d60bf0b98bb412a8,kubernetes.io/config.seen: 2024-06-03T13:35:36.984921575Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:586d69d890612ad63e928507a6b9d23d8d3d3b680494dcfac417105f2fa3af15,Metadata:&PodSandboxMetadata{Name:etcd-pause-374510,Uid:dfdc9599d2feaa906e61fb7a6e4cf2b1,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1717421845640921236,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kuber
netes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.94.3:2379,kubernetes.io/config.hash: dfdc9599d2feaa906e61fb7a6e4cf2b1,kubernetes.io/config.seen: 2024-06-03T13:35:36.984917848Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:940e6913de02cd9d44fe61e933c51d87d0d0b5562bf737f64fc2a466d2f1a765,Metadata:&PodSandboxMetadata{Name:kube-proxy-6tc5r,Uid:13008dfa-c9ca-4978-bb85-797ab01a9495,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1717421845577956535,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/con
fig.seen: 2024-06-03T13:35:50.639634654Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f4b8177b-92b9-4e86-a611-e1f91ee8773a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.903127028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db409211-bd70-4fee-944c-9d5aacf46ef9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.903232382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db409211-bd70-4fee-944c-9d5aacf46ef9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.904026664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e,PodSandboxId:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717421866966502370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717421866985560331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65,PodSandboxId:3973517155d58d495053d072e8bfa888e8dd8ae229a0316c18b7d8e00fd7b13f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421862148224241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f
3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98,PodSandboxId:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421862138387418,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb4
12a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0,PodSandboxId:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421862107412796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernet
es.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2,PodSandboxId:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421862114220466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717421849478569966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed
4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592,PodSandboxId:940e6913de02cd9d44fe61e933c51d87d0d0b5562bf737f64fc2a466d2f1a765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717421846179624669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473,PodSandboxId:586d69d890612ad63e928507a6b9d23d8d3d3b680494dcfac417105f2fa3af15,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421846265523371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2,PodSandboxId:7f5e32f9e93875b5fab213d27ba8761561aaac4b57db73ec9b4b0ea4bea385c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421846191550779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d,PodSandboxId:4dc1fb3aa6d92c1a0f0f8f13424851a5b4e0019139f39be4d53440dd223b7153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421846123420339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d,PodSandboxId:aa8e932c5cdcb9aa97979ca2705f20053db1c445916f2cb7c80e8039c0beae75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421846048695570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db409211-bd70-4fee-944c-9d5aacf46ef9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.951715449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7ee9e40-de79-44f9-96bb-3d7cbc003063 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.951923092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7ee9e40-de79-44f9-96bb-3d7cbc003063 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.954118245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63fca961-073d-46a1-8c08-4592544b07c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.954950948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421882954912715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63fca961-073d-46a1-8c08-4592544b07c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.955623426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e85cd75c-064e-4f29-83b2-2213d7903ba5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.955701647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e85cd75c-064e-4f29-83b2-2213d7903ba5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:02 pause-374510 crio[3006]: time="2024-06-03 13:38:02.956116217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e,PodSandboxId:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717421866966502370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717421866985560331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65,PodSandboxId:3973517155d58d495053d072e8bfa888e8dd8ae229a0316c18b7d8e00fd7b13f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421862148224241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f
3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98,PodSandboxId:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421862138387418,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb4
12a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0,PodSandboxId:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421862107412796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernet
es.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2,PodSandboxId:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421862114220466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717421849478569966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed
4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592,PodSandboxId:940e6913de02cd9d44fe61e933c51d87d0d0b5562bf737f64fc2a466d2f1a765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717421846179624669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473,PodSandboxId:586d69d890612ad63e928507a6b9d23d8d3d3b680494dcfac417105f2fa3af15,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421846265523371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2,PodSandboxId:7f5e32f9e93875b5fab213d27ba8761561aaac4b57db73ec9b4b0ea4bea385c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421846191550779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d,PodSandboxId:4dc1fb3aa6d92c1a0f0f8f13424851a5b4e0019139f39be4d53440dd223b7153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421846123420339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d,PodSandboxId:aa8e932c5cdcb9aa97979ca2705f20053db1c445916f2cb7c80e8039c0beae75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421846048695570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e85cd75c-064e-4f29-83b2-2213d7903ba5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.006972934Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f647fea-1917-411e-b7e8-1ff0f15fa9c5 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.007045910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f647fea-1917-411e-b7e8-1ff0f15fa9c5 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.008048796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9496584-f973-464f-a0b3-36d481e40c9e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.008425566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421883008400611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9496584-f973-464f-a0b3-36d481e40c9e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.009244473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebc2561f-2cd1-40c4-a7d5-7b40f9c092d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.009324721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebc2561f-2cd1-40c4-a7d5-7b40f9c092d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.009865368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e,PodSandboxId:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717421866966502370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717421866985560331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65,PodSandboxId:3973517155d58d495053d072e8bfa888e8dd8ae229a0316c18b7d8e00fd7b13f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421862148224241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f
3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98,PodSandboxId:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421862138387418,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb4
12a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0,PodSandboxId:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421862107412796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernet
es.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2,PodSandboxId:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421862114220466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717421849478569966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed
4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592,PodSandboxId:940e6913de02cd9d44fe61e933c51d87d0d0b5562bf737f64fc2a466d2f1a765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717421846179624669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473,PodSandboxId:586d69d890612ad63e928507a6b9d23d8d3d3b680494dcfac417105f2fa3af15,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421846265523371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2,PodSandboxId:7f5e32f9e93875b5fab213d27ba8761561aaac4b57db73ec9b4b0ea4bea385c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421846191550779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d,PodSandboxId:4dc1fb3aa6d92c1a0f0f8f13424851a5b4e0019139f39be4d53440dd223b7153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421846123420339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d,PodSandboxId:aa8e932c5cdcb9aa97979ca2705f20053db1c445916f2cb7c80e8039c0beae75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421846048695570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebc2561f-2cd1-40c4-a7d5-7b40f9c092d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.061470407Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19abf0cf-6b38-4f66-aceb-a8036c55c2f6 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.061578408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19abf0cf-6b38-4f66-aceb-a8036c55c2f6 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.070357201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb81b629-545b-4257-9b16-9c899d3fc7df name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.071438000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421883071394601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb81b629-545b-4257-9b16-9c899d3fc7df name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.072436931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c7f88ae-5515-4db2-95a1-6227723a1223 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.072535521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c7f88ae-5515-4db2-95a1-6227723a1223 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:03 pause-374510 crio[3006]: time="2024-06-03 13:38:03.073014186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e,PodSandboxId:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717421866966502370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717421866985560331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65,PodSandboxId:3973517155d58d495053d072e8bfa888e8dd8ae229a0316c18b7d8e00fd7b13f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421862148224241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f
3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98,PodSandboxId:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421862138387418,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb4
12a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0,PodSandboxId:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421862107412796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernet
es.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2,PodSandboxId:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421862114220466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717421849478569966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed
4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592,PodSandboxId:940e6913de02cd9d44fe61e933c51d87d0d0b5562bf737f64fc2a466d2f1a765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717421846179624669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473,PodSandboxId:586d69d890612ad63e928507a6b9d23d8d3d3b680494dcfac417105f2fa3af15,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421846265523371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2,PodSandboxId:7f5e32f9e93875b5fab213d27ba8761561aaac4b57db73ec9b4b0ea4bea385c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421846191550779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d,PodSandboxId:4dc1fb3aa6d92c1a0f0f8f13424851a5b4e0019139f39be4d53440dd223b7153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421846123420339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d,PodSandboxId:aa8e932c5cdcb9aa97979ca2705f20053db1c445916f2cb7c80e8039c0beae75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421846048695570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c7f88ae-5515-4db2-95a1-6227723a1223 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cb23db9f8a3d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 seconds ago      Running             coredns                   2                   9842f1d13a85b       coredns-7db6d8ff4d-k4gdp
	2c3ef99bcb80f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   16 seconds ago      Running             kube-proxy                2                   85b02571ace70       kube-proxy-6tc5r
	a69250a66f144       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   21 seconds ago      Running             kube-scheduler            2                   3973517155d58       kube-scheduler-pause-374510
	53fabfa8c7517       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   21 seconds ago      Running             kube-apiserver            2                   94b137d75ba34       kube-apiserver-pause-374510
	b0712cd1cac74       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   21 seconds ago      Running             kube-controller-manager   2                   1fdf623c19e30       kube-controller-manager-pause-374510
	d64e33e05a7ee       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago      Running             etcd                      2                   138d4c7d1e874       etcd-pause-374510
	4c4a1aa1c0a52       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   33 seconds ago      Exited              coredns                   1                   9842f1d13a85b       coredns-7db6d8ff4d-k4gdp
	2ad021413893a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   36 seconds ago      Exited              etcd                      1                   586d69d890612       etcd-pause-374510
	a5958fa5fc08c       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   36 seconds ago      Exited              kube-apiserver            1                   7f5e32f9e9387       kube-apiserver-pause-374510
	a5bb0ea114452       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   36 seconds ago      Exited              kube-proxy                1                   940e6913de02c       kube-proxy-6tc5r
	07d936e6df4f5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   37 seconds ago      Exited              kube-controller-manager   1                   4dc1fb3aa6d92       kube-controller-manager-pause-374510
	97feea60a1f2e       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   37 seconds ago      Exited              kube-scheduler            1                   aa8e932c5cdcb       kube-scheduler-pause-374510
	
	
	==> coredns [0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45251 - 46558 "HINFO IN 5808006950088449286.6134672951148051959. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027946014s
	
	
	==> coredns [4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49054 - 55389 "HINFO IN 3417706863083808389.5363907523611926070. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029407227s
	
	
	==> describe nodes <==
	Name:               pause-374510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-374510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=pause-374510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_35_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:35:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-374510
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:37:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:37:46 +0000   Mon, 03 Jun 2024 13:35:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:37:46 +0000   Mon, 03 Jun 2024 13:35:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:37:46 +0000   Mon, 03 Jun 2024 13:35:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:37:46 +0000   Mon, 03 Jun 2024 13:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.3
	  Hostname:    pause-374510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 3082e70007634b0d9204380c016436a0
	  System UUID:                3082e700-0763-4b0d-9204-380c016436a0
	  Boot ID:                    cb8d017a-cbb4-436a-88cc-b091df51b88e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-k4gdp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m13s
	  kube-system                 etcd-pause-374510                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m26s
	  kube-system                 kube-apiserver-pause-374510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-pause-374510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-6tc5r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-pause-374510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m11s                  kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m32s (x2 over 2m32s)  kubelet          Node pause-374510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m32s                  kubelet          Node pause-374510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m32s                  kubelet          Node pause-374510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m26s                  kubelet          Node pause-374510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m26s                  kubelet          Node pause-374510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m26s                  kubelet          Node pause-374510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m25s                  kubelet          Node pause-374510 status is now: NodeReady
	  Normal  RegisteredNode           2m14s                  node-controller  Node pause-374510 event: Registered Node pause-374510 in Controller
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)      kubelet          Node pause-374510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)      kubelet          Node pause-374510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)      kubelet          Node pause-374510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                     node-controller  Node pause-374510 event: Registered Node pause-374510 in Controller
	
	
	==> dmesg <==
	[  +0.062443] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065590] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.212042] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.145033] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.314882] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +4.666701] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.064933] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.942301] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063494] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.498188] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.090933] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.903602] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[  +0.126379] kauditd_printk_skb: 21 callbacks suppressed
	[Jun 3 13:36] kauditd_printk_skb: 88 callbacks suppressed
	[Jun 3 13:37] systemd-fstab-generator[2390]: Ignoring "noauto" option for root device
	[  +0.204694] systemd-fstab-generator[2402]: Ignoring "noauto" option for root device
	[  +0.569953] systemd-fstab-generator[2617]: Ignoring "noauto" option for root device
	[  +0.229602] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.659753] systemd-fstab-generator[2886]: Ignoring "noauto" option for root device
	[  +1.347047] systemd-fstab-generator[3162]: Ignoring "noauto" option for root device
	[  +8.708188] kauditd_printk_skb: 243 callbacks suppressed
	[  +4.301494] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.843235] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.321768] kauditd_printk_skb: 20 callbacks suppressed
	[ +11.725756] systemd-fstab-generator[4103]: Ignoring "noauto" option for root device
	
	
	==> etcd [2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473] <==
	{"level":"warn","ts":"2024-06-03T13:37:26.896121Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-03T13:37:26.896294Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.94.3:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.94.3:2380","--initial-cluster=pause-374510=https://192.168.94.3:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.94.3:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.94.3:2380","--name=pause-374510","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-fil
e=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-03T13:37:26.896408Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-03T13:37:26.896474Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-03T13:37:26.896514Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.94.3:2380"]}
	{"level":"info","ts":"2024-06-03T13:37:26.896798Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T13:37:26.897934Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.3:2379"]}
	{"level":"info","ts":"2024-06-03T13:37:26.905967Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-374510","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.94.3:2380"],"listen-peer-urls":["https://192.168.94.3:2380"],"advertise-client-urls":["https://192.168.94.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-to
ken":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-06-03T13:37:26.994295Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"86.932254ms"}
	{"level":"info","ts":"2024-06-03T13:37:27.084666Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	
	
	==> etcd [d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0] <==
	{"level":"info","ts":"2024-06-03T13:37:42.629196Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:37:42.629205Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:37:42.629445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 switched to configuration voters=(17307733575800698982)"}
	{"level":"info","ts":"2024-06-03T13:37:42.629537Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d28f0976ab114d3","local-member-id":"f0316bfca4b3b866","added-peer-id":"f0316bfca4b3b866","added-peer-peer-urls":["https://192.168.94.3:2380"]}
	{"level":"info","ts":"2024-06-03T13:37:42.629678Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d28f0976ab114d3","local-member-id":"f0316bfca4b3b866","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:37:42.629769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:37:42.634557Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T13:37:42.645165Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f0316bfca4b3b866","initial-advertise-peer-urls":["https://192.168.94.3:2380"],"listen-peer-urls":["https://192.168.94.3:2380"],"advertise-client-urls":["https://192.168.94.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T13:37:42.645221Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T13:37:42.634698Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.3:2380"}
	{"level":"info","ts":"2024-06-03T13:37:42.64525Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.3:2380"}
	{"level":"info","ts":"2024-06-03T13:37:44.393309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T13:37:44.393381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T13:37:44.393406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 received MsgPreVoteResp from f0316bfca4b3b866 at term 2"}
	{"level":"info","ts":"2024-06-03T13:37:44.393419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T13:37:44.393425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 received MsgVoteResp from f0316bfca4b3b866 at term 3"}
	{"level":"info","ts":"2024-06-03T13:37:44.393433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 became leader at term 3"}
	{"level":"info","ts":"2024-06-03T13:37:44.39344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0316bfca4b3b866 elected leader f0316bfca4b3b866 at term 3"}
	{"level":"info","ts":"2024-06-03T13:37:44.399191Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f0316bfca4b3b866","local-member-attributes":"{Name:pause-374510 ClientURLs:[https://192.168.94.3:2379]}","request-path":"/0/members/f0316bfca4b3b866/attributes","cluster-id":"d28f0976ab114d3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T13:37:44.399211Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:37:44.399605Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T13:37:44.399664Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T13:37:44.399231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:37:44.402083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T13:37:44.402598Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.3:2379"}
	
	
	==> kernel <==
	 13:38:03 up 3 min,  0 users,  load average: 0.99, 0.51, 0.20
	Linux pause-374510 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98] <==
	I0603 13:37:46.344713       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 13:37:46.344944       1 aggregator.go:165] initial CRD sync complete...
	I0603 13:37:46.345003       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 13:37:46.345033       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 13:37:46.345063       1 cache.go:39] Caches are synced for autoregister controller
	I0603 13:37:46.415041       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 13:37:46.415442       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 13:37:46.415465       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 13:37:46.415481       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 13:37:46.416285       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 13:37:46.415510       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 13:37:46.418826       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 13:37:46.422038       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 13:37:46.422080       1 policy_source.go:224] refreshing policies
	I0603 13:37:46.429306       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 13:37:46.432595       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0603 13:37:46.437890       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0603 13:37:47.210337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 13:37:47.922122       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 13:37:47.940683       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 13:37:47.998934       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 13:37:48.070696       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 13:37:48.090302       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 13:37:58.971578       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 13:37:59.154837       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2] <==
	I0603 13:37:26.957827       1 options.go:221] external host was not specified, using 192.168.94.3
	I0603 13:37:26.959923       1 server.go:148] Version: v1.30.1
	I0603 13:37:26.960275       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d] <==
	
	
	==> kube-controller-manager [b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2] <==
	I0603 13:37:58.980093       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 13:37:58.983795       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 13:37:58.983838       1 shared_informer.go:320] Caches are synced for TTL
	I0603 13:37:58.986486       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 13:37:58.988385       1 shared_informer.go:320] Caches are synced for expand
	I0603 13:37:58.989823       1 shared_informer.go:320] Caches are synced for GC
	I0603 13:37:58.990484       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 13:37:58.992060       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 13:37:58.993164       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 13:37:59.012657       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 13:37:59.014270       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 13:37:59.021233       1 shared_informer.go:320] Caches are synced for deployment
	I0603 13:37:59.024384       1 shared_informer.go:320] Caches are synced for service account
	I0603 13:37:59.027801       1 shared_informer.go:320] Caches are synced for disruption
	I0603 13:37:59.030827       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 13:37:59.058989       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 13:37:59.067818       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 13:37:59.117832       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 13:37:59.132973       1 shared_informer.go:320] Caches are synced for job
	I0603 13:37:59.144778       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 13:37:59.215905       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 13:37:59.219185       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 13:37:59.644960       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 13:37:59.680336       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 13:37:59.680426       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e] <==
	I0603 13:37:47.222820       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:37:47.243702       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.94.3"]
	I0603 13:37:47.327503       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:37:47.327564       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:37:47.327590       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:37:47.335858       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:37:47.336134       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:37:47.336180       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:37:47.338281       1 config.go:192] "Starting service config controller"
	I0603 13:37:47.338328       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:37:47.338366       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:37:47.338392       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:37:47.339948       1 config.go:319] "Starting node config controller"
	I0603 13:37:47.340072       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:37:47.440565       1 shared_informer.go:320] Caches are synced for node config
	I0603 13:37:47.440623       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:37:47.440670       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592] <==
	
	
	==> kube-scheduler [97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d] <==
	
	
	==> kube-scheduler [a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65] <==
	I0603 13:37:43.393384       1 serving.go:380] Generated self-signed cert in-memory
	W0603 13:37:46.281517       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 13:37:46.281641       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 13:37:46.281682       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 13:37:46.281713       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 13:37:46.344472       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 13:37:46.347092       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:37:46.353447       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 13:37:46.353769       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 13:37:46.355883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 13:37:46.356001       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 13:37:46.456713       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 13:37:41 pause-374510 kubelet[3705]: I0603 13:37:41.951448    3705 kubelet_node_status.go:73] "Attempting to register node" node="pause-374510"
	Jun 03 13:37:41 pause-374510 kubelet[3705]: E0603 13:37:41.952498    3705 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.94.3:8443: connect: connection refused" node="pause-374510"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.086290    3705 scope.go:117] "RemoveContainer" containerID="2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.089331    3705 scope.go:117] "RemoveContainer" containerID="a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.090191    3705 scope.go:117] "RemoveContainer" containerID="07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.094838    3705 scope.go:117] "RemoveContainer" containerID="97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: E0603 13:37:42.271282    3705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-374510?timeout=10s\": dial tcp 192.168.94.3:8443: connect: connection refused" interval="800ms"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.360166    3705 kubelet_node_status.go:73] "Attempting to register node" node="pause-374510"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: E0603 13:37:42.364541    3705 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.94.3:8443: connect: connection refused" node="pause-374510"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: W0603 13:37:42.458673    3705 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.94.3:8443: connect: connection refused
	Jun 03 13:37:42 pause-374510 kubelet[3705]: E0603 13:37:42.458910    3705 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.94.3:8443: connect: connection refused
	Jun 03 13:37:43 pause-374510 kubelet[3705]: I0603 13:37:43.166213    3705 kubelet_node_status.go:73] "Attempting to register node" node="pause-374510"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.475375    3705 kubelet_node_status.go:112] "Node was previously registered" node="pause-374510"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.475518    3705 kubelet_node_status.go:76] "Successfully registered node" node="pause-374510"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.477493    3705 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.479067    3705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.638098    3705 apiserver.go:52] "Watching apiserver"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.642560    3705 topology_manager.go:215] "Topology Admit Handler" podUID="13008dfa-c9ca-4978-bb85-797ab01a9495" podNamespace="kube-system" podName="kube-proxy-6tc5r"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.642854    3705 topology_manager.go:215] "Topology Admit Handler" podUID="ea63b48b-59b9-4fc1-ab16-4bd2452781d8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k4gdp"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.646581    3705 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.646707    3705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13008dfa-c9ca-4978-bb85-797ab01a9495-lib-modules\") pod \"kube-proxy-6tc5r\" (UID: \"13008dfa-c9ca-4978-bb85-797ab01a9495\") " pod="kube-system/kube-proxy-6tc5r"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.646778    3705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13008dfa-c9ca-4978-bb85-797ab01a9495-xtables-lock\") pod \"kube-proxy-6tc5r\" (UID: \"13008dfa-c9ca-4978-bb85-797ab01a9495\") " pod="kube-system/kube-proxy-6tc5r"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.943505    3705 scope.go:117] "RemoveContainer" containerID="a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.944377    3705 scope.go:117] "RemoveContainer" containerID="4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82"
	Jun 03 13:37:53 pause-374510 kubelet[3705]: I0603 13:37:53.295467    3705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-374510 -n pause-374510
helpers_test.go:261: (dbg) Run:  kubectl --context pause-374510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-374510 -n pause-374510
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-374510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-374510 logs -n 25: (2.097518741s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-021279 sudo journalctl                       | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | -xeu kubelet --all --full                            |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | status docker --all --full                           |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| delete  | -p cert-expiration-925487                            | cert-expiration-925487 | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cat docker --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo docker                           | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| start   | -p custom-flannel-021279                             | custom-flannel-021279  | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                        |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                        |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | status cri-docker --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cat cri-docker --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo                                  | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | status containerd --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cat containerd --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo cat                              | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo containerd                       | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | config dump                                          |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | status crio --all --full                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo systemctl                        | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | cat crio --no-pager                                  |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo find                             | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p auto-021279 sudo crio                             | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p auto-021279                                       | auto-021279            | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC | 03 Jun 24 13:37 UTC |
	| start   | -p kindnet-021279                                    | kindnet-021279         | jenkins | v1.33.1 | 03 Jun 24 13:37 UTC |                     |
	|         | --memory=3072                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                        |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:37:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:37:56.561662 1130573 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:37:56.561967 1130573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:37:56.561980 1130573 out.go:304] Setting ErrFile to fd 2...
	I0603 13:37:56.561987 1130573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:37:56.562220 1130573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:37:56.562876 1130573 out.go:298] Setting JSON to false
	I0603 13:37:56.564092 1130573 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15624,"bootTime":1717406253,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:37:56.564173 1130573 start.go:139] virtualization: kvm guest
	I0603 13:37:56.565901 1130573 out.go:177] * [kindnet-021279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:37:56.567965 1130573 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:37:56.567965 1130573 notify.go:220] Checking for updates...
	I0603 13:37:56.569673 1130573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:37:56.571562 1130573 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:37:56.573256 1130573 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:37:56.574774 1130573 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:37:56.576220 1130573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:37:56.579146 1130573 config.go:182] Loaded profile config "calico-021279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:37:56.579283 1130573 config.go:182] Loaded profile config "custom-flannel-021279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:37:56.579444 1130573 config.go:182] Loaded profile config "pause-374510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:37:56.579613 1130573 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:37:56.623810 1130573 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 13:37:56.625323 1130573 start.go:297] selected driver: kvm2
	I0603 13:37:56.625369 1130573 start.go:901] validating driver "kvm2" against <nil>
	I0603 13:37:56.625388 1130573 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:37:56.626482 1130573 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:37:56.626623 1130573 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:37:56.644932 1130573 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:37:56.645004 1130573 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 13:37:56.645331 1130573 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:37:56.645375 1130573 cni.go:84] Creating CNI manager for "kindnet"
	I0603 13:37:56.645387 1130573 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0603 13:37:56.645476 1130573 start.go:340] cluster config:
	{Name:kindnet-021279 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-021279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:37:56.645622 1130573 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:37:56.647720 1130573 out.go:177] * Starting "kindnet-021279" primary control-plane node in "kindnet-021279" cluster
	I0603 13:37:53.641702 1128402 pod_ready.go:92] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:53.641734 1128402 pod_ready.go:81] duration metric: took 5.508773941s for pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:53.641747 1128402 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:55.650752 1128402 pod_ready.go:92] pod "etcd-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:55.650789 1128402 pod_ready.go:81] duration metric: took 2.009032722s for pod "etcd-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:55.650804 1128402 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.159124 1128402 pod_ready.go:92] pod "kube-apiserver-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:56.159156 1128402 pod_ready.go:81] duration metric: took 508.342427ms for pod "kube-apiserver-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.159195 1128402 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.166278 1128402 pod_ready.go:92] pod "kube-controller-manager-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:56.166310 1128402 pod_ready.go:81] duration metric: took 7.106132ms for pod "kube-controller-manager-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.166325 1128402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6tc5r" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.172136 1128402 pod_ready.go:92] pod "kube-proxy-6tc5r" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:56.172165 1128402 pod_ready.go:81] duration metric: took 5.831785ms for pod "kube-proxy-6tc5r" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:56.172178 1128402 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:58.179321 1128402 pod_ready.go:102] pod "kube-scheduler-pause-374510" in "kube-system" namespace has status "Ready":"False"
	I0603 13:37:59.179524 1128402 pod_ready.go:92] pod "kube-scheduler-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:59.179552 1128402 pod_ready.go:81] duration metric: took 3.007365098s for pod "kube-scheduler-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.179560 1128402 pod_ready.go:38] duration metric: took 11.055483912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:37:59.179579 1128402 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:37:59.195447 1128402 ops.go:34] apiserver oom_adj: -16
	I0603 13:37:59.195481 1128402 kubeadm.go:591] duration metric: took 29.391734123s to restartPrimaryControlPlane
	I0603 13:37:59.195502 1128402 kubeadm.go:393] duration metric: took 29.556098168s to StartCluster
	I0603 13:37:59.195521 1128402 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:37:59.195609 1128402 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:37:59.196881 1128402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:37:59.197184 1128402 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.94.3 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:37:59.199057 1128402 out.go:177] * Verifying Kubernetes components...
	I0603 13:37:59.197289 1128402 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:37:59.197508 1128402 config.go:182] Loaded profile config "pause-374510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:37:59.200865 1128402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:37:59.200873 1128402 out.go:177] * Enabled addons: 
	I0603 13:37:57.287525 1128335 node_ready.go:53] node "calico-021279" has status "Ready":"False"
	I0603 13:37:58.286008 1128335 node_ready.go:49] node "calico-021279" has status "Ready":"True"
	I0603 13:37:58.286039 1128335 node_ready.go:38] duration metric: took 8.003135211s for node "calico-021279" to be "Ready" ...
	I0603 13:37:58.286051 1128335 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:37:58.301560 1128335 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-564985c589-znvpg" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.308720 1128335 pod_ready.go:102] pod "calico-kube-controllers-564985c589-znvpg" in "kube-system" namespace has status "Ready":"False"
	I0603 13:37:56.299502 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:37:56.327982 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:37:56.328007 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:37:56.300218 1129998 retry.go:31] will retry after 634.620815ms: waiting for machine to come up
	I0603 13:37:56.936416 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:37:56.937110 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:37:56.937142 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:37:56.937047 1129998 retry.go:31] will retry after 966.248782ms: waiting for machine to come up
	I0603 13:37:57.904883 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:37:57.905954 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:37:57.905982 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:37:57.905883 1129998 retry.go:31] will retry after 1.019724207s: waiting for machine to come up
	I0603 13:37:58.927151 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:37:58.927706 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:37:58.927737 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:37:58.927650 1129998 retry.go:31] will retry after 1.440630461s: waiting for machine to come up
	I0603 13:38:00.369529 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | domain custom-flannel-021279 has defined MAC address 52:54:00:0b:62:a0 in network mk-custom-flannel-021279
	I0603 13:38:00.370177 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | unable to find current IP address of domain custom-flannel-021279 in network mk-custom-flannel-021279
	I0603 13:38:00.370206 1129957 main.go:141] libmachine: (custom-flannel-021279) DBG | I0603 13:38:00.370095 1129998 retry.go:31] will retry after 1.420803394s: waiting for machine to come up
	I0603 13:37:56.649075 1130573 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:37:56.649134 1130573 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 13:37:56.649146 1130573 cache.go:56] Caching tarball of preloaded images
	I0603 13:37:56.649307 1130573 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:37:56.649326 1130573 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 13:37:56.649515 1130573 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/config.json ...
	I0603 13:37:56.649542 1130573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/config.json: {Name:mka7e6568c5ab33747f807bb6c3d3f010f4b2853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:37:56.649796 1130573 start.go:360] acquireMachinesLock for kindnet-021279: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:37:59.202251 1128402 addons.go:510] duration metric: took 4.96381ms for enable addons: enabled=[]
	I0603 13:37:59.430699 1128402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:37:59.447061 1128402 node_ready.go:35] waiting up to 6m0s for node "pause-374510" to be "Ready" ...
	I0603 13:37:59.450786 1128402 node_ready.go:49] node "pause-374510" has status "Ready":"True"
	I0603 13:37:59.450812 1128402 node_ready.go:38] duration metric: took 3.683419ms for node "pause-374510" to be "Ready" ...
	I0603 13:37:59.450821 1128402 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:37:59.455872 1128402 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.461935 1128402 pod_ready.go:92] pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:59.461955 1128402 pod_ready.go:81] duration metric: took 6.05415ms for pod "coredns-7db6d8ff4d-k4gdp" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.461964 1128402 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.645780 1128402 pod_ready.go:92] pod "etcd-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:37:59.645808 1128402 pod_ready.go:81] duration metric: took 183.838709ms for pod "etcd-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:37:59.645819 1128402 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.046057 1128402 pod_ready.go:92] pod "kube-apiserver-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:38:00.046091 1128402 pod_ready.go:81] duration metric: took 400.263199ms for pod "kube-apiserver-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.046106 1128402 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.449364 1128402 pod_ready.go:92] pod "kube-controller-manager-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:38:00.449401 1128402 pod_ready.go:81] duration metric: took 403.285311ms for pod "kube-controller-manager-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.449443 1128402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6tc5r" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.846767 1128402 pod_ready.go:92] pod "kube-proxy-6tc5r" in "kube-system" namespace has status "Ready":"True"
	I0603 13:38:00.846801 1128402 pod_ready.go:81] duration metric: took 397.348896ms for pod "kube-proxy-6tc5r" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:00.846814 1128402 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:01.247647 1128402 pod_ready.go:92] pod "kube-scheduler-pause-374510" in "kube-system" namespace has status "Ready":"True"
	I0603 13:38:01.247682 1128402 pod_ready.go:81] duration metric: took 400.859083ms for pod "kube-scheduler-pause-374510" in "kube-system" namespace to be "Ready" ...
	I0603 13:38:01.247694 1128402 pod_ready.go:38] duration metric: took 1.796862399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:38:01.247713 1128402 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:38:01.247801 1128402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:38:01.270397 1128402 api_server.go:72] duration metric: took 2.073152123s to wait for apiserver process to appear ...
	I0603 13:38:01.270464 1128402 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:38:01.270506 1128402 api_server.go:253] Checking apiserver healthz at https://192.168.94.3:8443/healthz ...
	I0603 13:38:01.276496 1128402 api_server.go:279] https://192.168.94.3:8443/healthz returned 200:
	ok
	I0603 13:38:01.277754 1128402 api_server.go:141] control plane version: v1.30.1
	I0603 13:38:01.277828 1128402 api_server.go:131] duration metric: took 7.355365ms to wait for apiserver health ...
	I0603 13:38:01.277842 1128402 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:38:01.449829 1128402 system_pods.go:59] 6 kube-system pods found
	I0603 13:38:01.449874 1128402 system_pods.go:61] "coredns-7db6d8ff4d-k4gdp" [ea63b48b-59b9-4fc1-ab16-4bd2452781d8] Running
	I0603 13:38:01.449884 1128402 system_pods.go:61] "etcd-pause-374510" [6802cbfa-1b6a-4f49-93d4-8fd472f4f1ba] Running
	I0603 13:38:01.449889 1128402 system_pods.go:61] "kube-apiserver-pause-374510" [b54aa0cb-5d8f-4499-9db7-b8b9f8435cf9] Running
	I0603 13:38:01.449896 1128402 system_pods.go:61] "kube-controller-manager-pause-374510" [4b6dcc0e-479d-4f24-876b-6933b62af65e] Running
	I0603 13:38:01.449901 1128402 system_pods.go:61] "kube-proxy-6tc5r" [13008dfa-c9ca-4978-bb85-797ab01a9495] Running
	I0603 13:38:01.449906 1128402 system_pods.go:61] "kube-scheduler-pause-374510" [8c97b04d-8936-438d-bf6b-4c192d34e4d4] Running
	I0603 13:38:01.449915 1128402 system_pods.go:74] duration metric: took 172.065387ms to wait for pod list to return data ...
	I0603 13:38:01.449926 1128402 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:38:01.647422 1128402 default_sa.go:45] found service account: "default"
	I0603 13:38:01.647474 1128402 default_sa.go:55] duration metric: took 197.537996ms for default service account to be created ...
	I0603 13:38:01.647489 1128402 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:38:01.849376 1128402 system_pods.go:86] 6 kube-system pods found
	I0603 13:38:01.849423 1128402 system_pods.go:89] "coredns-7db6d8ff4d-k4gdp" [ea63b48b-59b9-4fc1-ab16-4bd2452781d8] Running
	I0603 13:38:01.849432 1128402 system_pods.go:89] "etcd-pause-374510" [6802cbfa-1b6a-4f49-93d4-8fd472f4f1ba] Running
	I0603 13:38:01.849439 1128402 system_pods.go:89] "kube-apiserver-pause-374510" [b54aa0cb-5d8f-4499-9db7-b8b9f8435cf9] Running
	I0603 13:38:01.849446 1128402 system_pods.go:89] "kube-controller-manager-pause-374510" [4b6dcc0e-479d-4f24-876b-6933b62af65e] Running
	I0603 13:38:01.849452 1128402 system_pods.go:89] "kube-proxy-6tc5r" [13008dfa-c9ca-4978-bb85-797ab01a9495] Running
	I0603 13:38:01.849459 1128402 system_pods.go:89] "kube-scheduler-pause-374510" [8c97b04d-8936-438d-bf6b-4c192d34e4d4] Running
	I0603 13:38:01.849468 1128402 system_pods.go:126] duration metric: took 201.971373ms to wait for k8s-apps to be running ...
	I0603 13:38:01.849477 1128402 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:38:01.849664 1128402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:38:01.870409 1128402 system_svc.go:56] duration metric: took 20.917384ms WaitForService to wait for kubelet
	I0603 13:38:01.870454 1128402 kubeadm.go:576] duration metric: took 2.673226559s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:38:01.870482 1128402 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:38:02.047147 1128402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:38:02.047179 1128402 node_conditions.go:123] node cpu capacity is 2
	I0603 13:38:02.047192 1128402 node_conditions.go:105] duration metric: took 176.703823ms to run NodePressure ...
	I0603 13:38:02.047207 1128402 start.go:240] waiting for startup goroutines ...
	I0603 13:38:02.047215 1128402 start.go:245] waiting for cluster config update ...
	I0603 13:38:02.047227 1128402 start.go:254] writing updated cluster config ...
	I0603 13:38:02.047574 1128402 ssh_runner.go:195] Run: rm -f paused
	I0603 13:38:02.117595 1128402 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:38:02.119949 1128402 out.go:177] * Done! kubectl is now configured to use "pause-374510" cluster and "default" namespace by default
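	The readiness checks logged above (node Ready, system pods Ready, apiserver healthz) can be spot-checked by hand against the same cluster; a minimal sketch using kubectl, with the context name taken from the log and output that will differ from run to run:
	
	    kubectl --context pause-374510 -n kube-system get pods
	    kubectl --context pause-374510 get --raw='/healthz'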
	
	
	==> CRI-O <==
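	The entries below are CRI-O's debug log for incoming CRI RPCs (Version, ImageFsInfo, ListContainers, ListPodSandbox). Roughly the same information can be pulled interactively from the node with crictl, assuming crictl is available there (it normally ships in minikube's ISO); the profile name is taken from the log lines below:
	
	    out/minikube-linux-amd64 -p pause-374510 ssh "sudo crictl version"
	    out/minikube-linux-amd64 -p pause-374510 ssh "sudo crictl ps -a"
	    out/minikube-linux-amd64 -p pause-374510 ssh "sudo crictl pods"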
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.569388799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a58deb4-0399-41a4-9e23-074c62e64d42 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.571546007Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f166ef0-bcba-4c82-807c-8204d68cbaaf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.572157278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421885572127069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f166ef0-bcba-4c82-807c-8204d68cbaaf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.573427572Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23a3fb4b-b18e-45ba-b229-942460a01d8d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.573486222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23a3fb4b-b18e-45ba-b229-942460a01d8d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.573817996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e,PodSandboxId:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717421866966502370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717421866985560331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65,PodSandboxId:3973517155d58d495053d072e8bfa888e8dd8ae229a0316c18b7d8e00fd7b13f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421862148224241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f
3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98,PodSandboxId:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421862138387418,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb4
12a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0,PodSandboxId:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421862107412796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernet
es.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2,PodSandboxId:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421862114220466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717421849478569966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed
4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592,PodSandboxId:940e6913de02cd9d44fe61e933c51d87d0d0b5562bf737f64fc2a466d2f1a765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717421846179624669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473,PodSandboxId:586d69d890612ad63e928507a6b9d23d8d3d3b680494dcfac417105f2fa3af15,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421846265523371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2,PodSandboxId:7f5e32f9e93875b5fab213d27ba8761561aaac4b57db73ec9b4b0ea4bea385c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421846191550779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d,PodSandboxId:4dc1fb3aa6d92c1a0f0f8f13424851a5b4e0019139f39be4d53440dd223b7153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421846123420339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d,PodSandboxId:aa8e932c5cdcb9aa97979ca2705f20053db1c445916f2cb7c80e8039c0beae75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421846048695570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23a3fb4b-b18e-45ba-b229-942460a01d8d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.625824175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61dffe07-e09b-4c3b-8781-d3ab0de2dfac name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.625923996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61dffe07-e09b-4c3b-8781-d3ab0de2dfac name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.628025629Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ca9225d-3b0f-4996-8021-5f6324a9b8a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.628689000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421885628652358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ca9225d-3b0f-4996-8021-5f6324a9b8a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.629881162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73e0cb98-9ea5-4148-8922-fd6b7de37134 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.630045405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73e0cb98-9ea5-4148-8922-fd6b7de37134 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.630431816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e,PodSandboxId:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717421866966502370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717421866985560331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65,PodSandboxId:3973517155d58d495053d072e8bfa888e8dd8ae229a0316c18b7d8e00fd7b13f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421862148224241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f
3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98,PodSandboxId:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421862138387418,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb4
12a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0,PodSandboxId:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421862107412796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernet
es.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2,PodSandboxId:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421862114220466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717421849478569966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed
4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592,PodSandboxId:940e6913de02cd9d44fe61e933c51d87d0d0b5562bf737f64fc2a466d2f1a765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717421846179624669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473,PodSandboxId:586d69d890612ad63e928507a6b9d23d8d3d3b680494dcfac417105f2fa3af15,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421846265523371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2,PodSandboxId:7f5e32f9e93875b5fab213d27ba8761561aaac4b57db73ec9b4b0ea4bea385c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421846191550779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d,PodSandboxId:4dc1fb3aa6d92c1a0f0f8f13424851a5b4e0019139f39be4d53440dd223b7153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421846123420339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d,PodSandboxId:aa8e932c5cdcb9aa97979ca2705f20053db1c445916f2cb7c80e8039c0beae75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421846048695570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73e0cb98-9ea5-4148-8922-fd6b7de37134 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.684626284Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88e488da-7d51-4eb8-9724-5672d1779d31 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.684977869Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-k4gdp,Uid:ea63b48b-59b9-4fc1-ab16-4bd2452781d8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421849024327768,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:35:50.763148968Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&PodSandboxMetadata{Name:kube-proxy-6tc5r,Uid:13008dfa-c9ca-4978-bb85-797ab01a9495,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1717421848953662809,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:35:50.639634654Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&PodSandboxMetadata{Name:etcd-pause-374510,Uid:dfdc9599d2feaa906e61fb7a6e4cf2b1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421848909611263,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.94.3:2379,kubernetes.io/config.hash: dfdc9599d2feaa906e61fb7a6e4cf2b1,kubernetes.io/config.seen: 2024-06-03T13:35:36.984917848Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-374510,Uid:377763bf209825dc0f3733b4fb073b5b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421848832105505,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 377763bf209825dc0f3733b4fb073b5b,kubernetes.io/config.seen: 2024-06-03T13:35:36.984922774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3973517155d58d495053d072e8bfa888e8d
d8ae229a0316c18b7d8e00fd7b13f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-374510,Uid:50a237ba4f3cdf7f5165fcfcc243c780,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421848776941977,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 50a237ba4f3cdf7f5165fcfcc243c780,kubernetes.io/config.seen: 2024-06-03T13:35:36.984923553Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-374510,Uid:ddffbe442e034723d60bf0b98bb412a8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717421848762421250,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.94.3:8443,kubernetes.io/config.hash: ddffbe442e034723d60bf0b98bb412a8,kubernetes.io/config.seen: 2024-06-03T13:35:36.984921575Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=88e488da-7d51-4eb8-9724-5672d1779d31 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.685856475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba1a5e9c-2a68-4ab6-87c3-59069a798073 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.685981315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba1a5e9c-2a68-4ab6-87c3-59069a798073 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.686171185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e,PodSandboxId:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717421866966502370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717421866985560331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65,PodSandboxId:3973517155d58d495053d072e8bfa888e8dd8ae229a0316c18b7d8e00fd7b13f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421862148224241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f
3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98,PodSandboxId:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421862138387418,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb4
12a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0,PodSandboxId:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421862107412796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernet
es.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2,PodSandboxId:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421862114220466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba1a5e9c-2a68-4ab6-87c3-59069a798073 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.710467532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4741397-20ab-415e-8a0a-03b2426c58b1 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.710590485Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4741397-20ab-415e-8a0a-03b2426c58b1 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.712481940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66255547-dbb4-4875-80b3-47a69e0dbbca name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.713222496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717421885713192356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66255547-dbb4-4875-80b3-47a69e0dbbca name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.714239725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6308850-d937-4b01-b21e-0dc29743d29c name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.714324385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6308850-d937-4b01-b21e-0dc29743d29c name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:38:05 pause-374510 crio[3006]: time="2024-06-03 13:38:05.714790338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e,PodSandboxId:85b02571ace707f0e4c5b222d0995e8d55a9c1c3e4851755af821f31b714bd63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717421866966502370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717421866985560331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65,PodSandboxId:3973517155d58d495053d072e8bfa888e8dd8ae229a0316c18b7d8e00fd7b13f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717421862148224241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50a237ba4f
3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98,PodSandboxId:94b137d75ba345a1764a7316d025be19cf2bb136da2a00646ce75422605cb2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717421862138387418,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb4
12a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0,PodSandboxId:138d4c7d1e874cdb202fb0fdd6882666ca5e79e79ef3b17a7f77b5d2dd1f606c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717421862107412796,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernet
es.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2,PodSandboxId:1fdf623c19e30a9d6eeed0489e4b05439916b481f77d85a3487147bb38b55c7e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717421862114220466,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82,PodSandboxId:9842f1d13a85b715b4029ee646388a47e9ab0ccdbc2acc765bb2132e7f832ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717421849478569966,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4gdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea63b48b-59b9-4fc1-ab16-4bd2452781d8,},Annotations:map[string]string{io.kubernetes.container.hash: bfed
4508,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592,PodSandboxId:940e6913de02cd9d44fe61e933c51d87d0d0b5562bf737f64fc2a466d2f1a765,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717421846179624669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-6tc5r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13008dfa-c9ca-4978-bb85-797ab01a9495,},Annotations:map[string]string{io.kubernetes.container.hash: 1ee71ac3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473,PodSandboxId:586d69d890612ad63e928507a6b9d23d8d3d3b680494dcfac417105f2fa3af15,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717421846265523371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-374510,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: dfdc9599d2feaa906e61fb7a6e4cf2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 45a6e82c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2,PodSandboxId:7f5e32f9e93875b5fab213d27ba8761561aaac4b57db73ec9b4b0ea4bea385c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717421846191550779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-374510,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: ddffbe442e034723d60bf0b98bb412a8,},Annotations:map[string]string{io.kubernetes.container.hash: 86c219cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d,PodSandboxId:4dc1fb3aa6d92c1a0f0f8f13424851a5b4e0019139f39be4d53440dd223b7153,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717421846123420339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-374510,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 377763bf209825dc0f3733b4fb073b5b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d,PodSandboxId:aa8e932c5cdcb9aa97979ca2705f20053db1c445916f2cb7c80e8039c0beae75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717421846048695570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-374510,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 50a237ba4f3cdf7f5165fcfcc243c780,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6308850-d937-4b01-b21e-0dc29743d29c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cb23db9f8a3d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   2                   9842f1d13a85b       coredns-7db6d8ff4d-k4gdp
	2c3ef99bcb80f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   18 seconds ago      Running             kube-proxy                2                   85b02571ace70       kube-proxy-6tc5r
	a69250a66f144       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   23 seconds ago      Running             kube-scheduler            2                   3973517155d58       kube-scheduler-pause-374510
	53fabfa8c7517       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   23 seconds ago      Running             kube-apiserver            2                   94b137d75ba34       kube-apiserver-pause-374510
	b0712cd1cac74       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   23 seconds ago      Running             kube-controller-manager   2                   1fdf623c19e30       kube-controller-manager-pause-374510
	d64e33e05a7ee       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      2                   138d4c7d1e874       etcd-pause-374510
	4c4a1aa1c0a52       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago      Exited              coredns                   1                   9842f1d13a85b       coredns-7db6d8ff4d-k4gdp
	2ad021413893a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   39 seconds ago      Exited              etcd                      1                   586d69d890612       etcd-pause-374510
	a5958fa5fc08c       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   39 seconds ago      Exited              kube-apiserver            1                   7f5e32f9e9387       kube-apiserver-pause-374510
	a5bb0ea114452       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   39 seconds ago      Exited              kube-proxy                1                   940e6913de02c       kube-proxy-6tc5r
	07d936e6df4f5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   39 seconds ago      Exited              kube-controller-manager   1                   4dc1fb3aa6d92       kube-controller-manager-pause-374510
	97feea60a1f2e       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   39 seconds ago      Exited              kube-scheduler            1                   aa8e932c5cdcb       kube-scheduler-pause-374510
	
	
	==> coredns [0cb23db9f8a3dd78b3d53081b709a1d3d1156cf4c9723b592f545a51b616d60d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45251 - 46558 "HINFO IN 5808006950088449286.6134672951148051959. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027946014s
	
	
	==> coredns [4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49054 - 55389 "HINFO IN 3417706863083808389.5363907523611926070. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029407227s
	
	
	==> describe nodes <==
	Name:               pause-374510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-374510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=pause-374510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_35_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:35:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-374510
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 13:37:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 13:37:46 +0000   Mon, 03 Jun 2024 13:35:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 13:37:46 +0000   Mon, 03 Jun 2024 13:35:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 13:37:46 +0000   Mon, 03 Jun 2024 13:35:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 13:37:46 +0000   Mon, 03 Jun 2024 13:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.3
	  Hostname:    pause-374510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 3082e70007634b0d9204380c016436a0
	  System UUID:                3082e700-0763-4b0d-9204-380c016436a0
	  Boot ID:                    cb8d017a-cbb4-436a-88cc-b091df51b88e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-k4gdp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m16s
	  kube-system                 etcd-pause-374510                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m29s
	  kube-system                 kube-apiserver-pause-374510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-pause-374510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-6tc5r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-pause-374510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m14s                  kube-proxy       
	  Normal  Starting                 19s                    kube-proxy       
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m35s (x2 over 2m35s)  kubelet          Node pause-374510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s                  kubelet          Node pause-374510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s                  kubelet          Node pause-374510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m29s                  kubelet          Node pause-374510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m29s                  kubelet          Node pause-374510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m29s                  kubelet          Node pause-374510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m28s                  kubelet          Node pause-374510 status is now: NodeReady
	  Normal  RegisteredNode           2m17s                  node-controller  Node pause-374510 event: Registered Node pause-374510 in Controller
	  Normal  Starting                 25s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)      kubelet          Node pause-374510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)      kubelet          Node pause-374510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)      kubelet          Node pause-374510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                     node-controller  Node pause-374510 event: Registered Node pause-374510 in Controller
	
	
	==> dmesg <==
	[  +0.062443] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065590] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.212042] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.145033] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.314882] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +4.666701] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.064933] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.942301] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063494] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.498188] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.090933] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.903602] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[  +0.126379] kauditd_printk_skb: 21 callbacks suppressed
	[Jun 3 13:36] kauditd_printk_skb: 88 callbacks suppressed
	[Jun 3 13:37] systemd-fstab-generator[2390]: Ignoring "noauto" option for root device
	[  +0.204694] systemd-fstab-generator[2402]: Ignoring "noauto" option for root device
	[  +0.569953] systemd-fstab-generator[2617]: Ignoring "noauto" option for root device
	[  +0.229602] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.659753] systemd-fstab-generator[2886]: Ignoring "noauto" option for root device
	[  +1.347047] systemd-fstab-generator[3162]: Ignoring "noauto" option for root device
	[  +8.708188] kauditd_printk_skb: 243 callbacks suppressed
	[  +4.301494] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.843235] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.321768] kauditd_printk_skb: 20 callbacks suppressed
	[ +11.725756] systemd-fstab-generator[4103]: Ignoring "noauto" option for root device
	
	
	==> etcd [2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473] <==
	{"level":"warn","ts":"2024-06-03T13:37:26.896121Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-03T13:37:26.896294Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.94.3:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.94.3:2380","--initial-cluster=pause-374510=https://192.168.94.3:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.94.3:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.94.3:2380","--name=pause-374510","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-fil
e=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-06-03T13:37:26.896408Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-06-03T13:37:26.896474Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-06-03T13:37:26.896514Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.94.3:2380"]}
	{"level":"info","ts":"2024-06-03T13:37:26.896798Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T13:37:26.897934Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.3:2379"]}
	{"level":"info","ts":"2024-06-03T13:37:26.905967Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-374510","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.94.3:2380"],"listen-peer-urls":["https://192.168.94.3:2380"],"advertise-client-urls":["https://192.168.94.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-to
ken":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-06-03T13:37:26.994295Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"86.932254ms"}
	{"level":"info","ts":"2024-06-03T13:37:27.084666Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	
	
	==> etcd [d64e33e05a7ee34059a437610585db1b0e60e597731ef4f35efc8f23ebfd52d0] <==
	{"level":"info","ts":"2024-06-03T13:37:42.629196Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:37:42.629205Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-03T13:37:42.629445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 switched to configuration voters=(17307733575800698982)"}
	{"level":"info","ts":"2024-06-03T13:37:42.629537Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d28f0976ab114d3","local-member-id":"f0316bfca4b3b866","added-peer-id":"f0316bfca4b3b866","added-peer-peer-urls":["https://192.168.94.3:2380"]}
	{"level":"info","ts":"2024-06-03T13:37:42.629678Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d28f0976ab114d3","local-member-id":"f0316bfca4b3b866","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:37:42.629769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:37:42.634557Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T13:37:42.645165Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f0316bfca4b3b866","initial-advertise-peer-urls":["https://192.168.94.3:2380"],"listen-peer-urls":["https://192.168.94.3:2380"],"advertise-client-urls":["https://192.168.94.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T13:37:42.645221Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T13:37:42.634698Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.3:2380"}
	{"level":"info","ts":"2024-06-03T13:37:42.64525Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.3:2380"}
	{"level":"info","ts":"2024-06-03T13:37:44.393309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-03T13:37:44.393381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-03T13:37:44.393406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 received MsgPreVoteResp from f0316bfca4b3b866 at term 2"}
	{"level":"info","ts":"2024-06-03T13:37:44.393419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 became candidate at term 3"}
	{"level":"info","ts":"2024-06-03T13:37:44.393425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 received MsgVoteResp from f0316bfca4b3b866 at term 3"}
	{"level":"info","ts":"2024-06-03T13:37:44.393433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0316bfca4b3b866 became leader at term 3"}
	{"level":"info","ts":"2024-06-03T13:37:44.39344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0316bfca4b3b866 elected leader f0316bfca4b3b866 at term 3"}
	{"level":"info","ts":"2024-06-03T13:37:44.399191Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f0316bfca4b3b866","local-member-attributes":"{Name:pause-374510 ClientURLs:[https://192.168.94.3:2379]}","request-path":"/0/members/f0316bfca4b3b866/attributes","cluster-id":"d28f0976ab114d3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T13:37:44.399211Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:37:44.399605Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T13:37:44.399664Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T13:37:44.399231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:37:44.402083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T13:37:44.402598Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.3:2379"}
	
	
	==> kernel <==
	 13:38:06 up 3 min,  0 users,  load average: 0.99, 0.51, 0.20
	Linux pause-374510 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [53fabfa8c7517b5dd93ca8586044c5afe02f25933453645a4bc07434517fec98] <==
	I0603 13:37:46.344713       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0603 13:37:46.344944       1 aggregator.go:165] initial CRD sync complete...
	I0603 13:37:46.345003       1 autoregister_controller.go:141] Starting autoregister controller
	I0603 13:37:46.345033       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0603 13:37:46.345063       1 cache.go:39] Caches are synced for autoregister controller
	I0603 13:37:46.415041       1 shared_informer.go:320] Caches are synced for configmaps
	I0603 13:37:46.415442       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0603 13:37:46.415465       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0603 13:37:46.415481       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0603 13:37:46.416285       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0603 13:37:46.415510       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0603 13:37:46.418826       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0603 13:37:46.422038       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0603 13:37:46.422080       1 policy_source.go:224] refreshing policies
	I0603 13:37:46.429306       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0603 13:37:46.432595       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0603 13:37:46.437890       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0603 13:37:47.210337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0603 13:37:47.922122       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0603 13:37:47.940683       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0603 13:37:47.998934       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0603 13:37:48.070696       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0603 13:37:48.090302       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0603 13:37:58.971578       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0603 13:37:59.154837       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2] <==
	I0603 13:37:26.957827       1 options.go:221] external host was not specified, using 192.168.94.3
	I0603 13:37:26.959923       1 server.go:148] Version: v1.30.1
	I0603 13:37:26.960275       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d] <==
	
	
	==> kube-controller-manager [b0712cd1cac74f0eab2e6823ff7c28f1c14c20828fca7018183948cb3c515ad2] <==
	I0603 13:37:58.980093       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0603 13:37:58.983795       1 shared_informer.go:320] Caches are synced for ephemeral
	I0603 13:37:58.983838       1 shared_informer.go:320] Caches are synced for TTL
	I0603 13:37:58.986486       1 shared_informer.go:320] Caches are synced for crt configmap
	I0603 13:37:58.988385       1 shared_informer.go:320] Caches are synced for expand
	I0603 13:37:58.989823       1 shared_informer.go:320] Caches are synced for GC
	I0603 13:37:58.990484       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0603 13:37:58.992060       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0603 13:37:58.993164       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0603 13:37:59.012657       1 shared_informer.go:320] Caches are synced for attach detach
	I0603 13:37:59.014270       1 shared_informer.go:320] Caches are synced for persistent volume
	I0603 13:37:59.021233       1 shared_informer.go:320] Caches are synced for deployment
	I0603 13:37:59.024384       1 shared_informer.go:320] Caches are synced for service account
	I0603 13:37:59.027801       1 shared_informer.go:320] Caches are synced for disruption
	I0603 13:37:59.030827       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0603 13:37:59.058989       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0603 13:37:59.067818       1 shared_informer.go:320] Caches are synced for cronjob
	I0603 13:37:59.117832       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0603 13:37:59.132973       1 shared_informer.go:320] Caches are synced for job
	I0603 13:37:59.144778       1 shared_informer.go:320] Caches are synced for endpoint
	I0603 13:37:59.215905       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 13:37:59.219185       1 shared_informer.go:320] Caches are synced for resource quota
	I0603 13:37:59.644960       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 13:37:59.680336       1 shared_informer.go:320] Caches are synced for garbage collector
	I0603 13:37:59.680426       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2c3ef99bcb80f03e2a2ddf94b11e826201743f0dd48929f09c80b068e344215e] <==
	I0603 13:37:47.222820       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:37:47.243702       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.94.3"]
	I0603 13:37:47.327503       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:37:47.327564       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:37:47.327590       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:37:47.335858       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:37:47.336134       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:37:47.336180       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:37:47.338281       1 config.go:192] "Starting service config controller"
	I0603 13:37:47.338328       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:37:47.338366       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:37:47.338392       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:37:47.339948       1 config.go:319] "Starting node config controller"
	I0603 13:37:47.340072       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:37:47.440565       1 shared_informer.go:320] Caches are synced for node config
	I0603 13:37:47.440623       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:37:47.440670       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592] <==
	
	
	==> kube-scheduler [97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d] <==
	
	
	==> kube-scheduler [a69250a66f1441760777adb3165aa7372f5f9b52e75d69a40f604bd2be0f9d65] <==
	I0603 13:37:43.393384       1 serving.go:380] Generated self-signed cert in-memory
	W0603 13:37:46.281517       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 13:37:46.281641       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 13:37:46.281682       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 13:37:46.281713       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 13:37:46.344472       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 13:37:46.347092       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:37:46.353447       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 13:37:46.353769       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 13:37:46.355883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 13:37:46.356001       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 13:37:46.456713       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 13:37:41 pause-374510 kubelet[3705]: I0603 13:37:41.951448    3705 kubelet_node_status.go:73] "Attempting to register node" node="pause-374510"
	Jun 03 13:37:41 pause-374510 kubelet[3705]: E0603 13:37:41.952498    3705 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.94.3:8443: connect: connection refused" node="pause-374510"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.086290    3705 scope.go:117] "RemoveContainer" containerID="2ad021413893a1057877b37ab2a5aaf857fbc67d8904ffcdcb50cdf8f3cd5473"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.089331    3705 scope.go:117] "RemoveContainer" containerID="a5958fa5fc08cd625441deb6a1c8f01186a492a25779925cead50389ac5778a2"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.090191    3705 scope.go:117] "RemoveContainer" containerID="07d936e6df4f5e08035f9b4e2f8682831c0d5beda04da16180aa097741c81d0d"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.094838    3705 scope.go:117] "RemoveContainer" containerID="97feea60a1f2eed2fa73ec0056267bc817af405169342b8c8640b346ed9c3b3d"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: E0603 13:37:42.271282    3705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-374510?timeout=10s\": dial tcp 192.168.94.3:8443: connect: connection refused" interval="800ms"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: I0603 13:37:42.360166    3705 kubelet_node_status.go:73] "Attempting to register node" node="pause-374510"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: E0603 13:37:42.364541    3705 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.94.3:8443: connect: connection refused" node="pause-374510"
	Jun 03 13:37:42 pause-374510 kubelet[3705]: W0603 13:37:42.458673    3705 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.94.3:8443: connect: connection refused
	Jun 03 13:37:42 pause-374510 kubelet[3705]: E0603 13:37:42.458910    3705 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.94.3:8443: connect: connection refused
	Jun 03 13:37:43 pause-374510 kubelet[3705]: I0603 13:37:43.166213    3705 kubelet_node_status.go:73] "Attempting to register node" node="pause-374510"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.475375    3705 kubelet_node_status.go:112] "Node was previously registered" node="pause-374510"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.475518    3705 kubelet_node_status.go:76] "Successfully registered node" node="pause-374510"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.477493    3705 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.479067    3705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.638098    3705 apiserver.go:52] "Watching apiserver"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.642560    3705 topology_manager.go:215] "Topology Admit Handler" podUID="13008dfa-c9ca-4978-bb85-797ab01a9495" podNamespace="kube-system" podName="kube-proxy-6tc5r"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.642854    3705 topology_manager.go:215] "Topology Admit Handler" podUID="ea63b48b-59b9-4fc1-ab16-4bd2452781d8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k4gdp"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.646581    3705 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.646707    3705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13008dfa-c9ca-4978-bb85-797ab01a9495-lib-modules\") pod \"kube-proxy-6tc5r\" (UID: \"13008dfa-c9ca-4978-bb85-797ab01a9495\") " pod="kube-system/kube-proxy-6tc5r"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.646778    3705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13008dfa-c9ca-4978-bb85-797ab01a9495-xtables-lock\") pod \"kube-proxy-6tc5r\" (UID: \"13008dfa-c9ca-4978-bb85-797ab01a9495\") " pod="kube-system/kube-proxy-6tc5r"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.943505    3705 scope.go:117] "RemoveContainer" containerID="a5bb0ea114452d89b9a9c672315f8800acbe91c80b0716e8bb40643528e0e592"
	Jun 03 13:37:46 pause-374510 kubelet[3705]: I0603 13:37:46.944377    3705 scope.go:117] "RemoveContainer" containerID="4c4a1aa1c0a528a93a6d1c0724b7347ae5975e50e0a4df0a56d406a85700ac82"
	Jun 03 13:37:53 pause-374510 kubelet[3705]: I0603 13:37:53.295467    3705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-374510 -n pause-374510
helpers_test.go:261: (dbg) Run:  kubectl --context pause-374510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (94.53s)
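A minimal sketch for re-running just this failed test from a minikube source checkout on a host with the same kvm2/crio setup; the -run pattern is taken from the test name above, while the timeout and the omission of any extra arguments the project's Makefile normally passes are assumptions, not part of this report:

	# hypothetical local re-run of the failed pause test (a sketch, not report output)
	go test ./test/integration -run 'TestPause/serial/SecondStartNoReconfiguration' -timeout 60m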

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (284.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-151788 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-151788 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m43.964482812s)

                                                
                                                
-- stdout --
	* [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 13:40:05.639115 1136266 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:40:05.639399 1136266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:40:05.639410 1136266 out.go:304] Setting ErrFile to fd 2...
	I0603 13:40:05.639414 1136266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:40:05.639630 1136266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:40:05.640259 1136266 out.go:298] Setting JSON to false
	I0603 13:40:05.641895 1136266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15753,"bootTime":1717406253,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:40:05.641989 1136266 start.go:139] virtualization: kvm guest
	I0603 13:40:05.644300 1136266 out.go:177] * [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:40:05.645932 1136266 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:40:05.645869 1136266 notify.go:220] Checking for updates...
	I0603 13:40:05.647374 1136266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:40:05.648838 1136266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:40:05.650428 1136266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:40:05.651880 1136266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:40:05.653123 1136266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:40:05.655103 1136266 config.go:182] Loaded profile config "bridge-021279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:40:05.655296 1136266 config.go:182] Loaded profile config "enable-default-cni-021279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:40:05.655429 1136266 config.go:182] Loaded profile config "flannel-021279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:40:05.655571 1136266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:40:05.697299 1136266 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 13:40:05.698790 1136266 start.go:297] selected driver: kvm2
	I0603 13:40:05.698822 1136266 start.go:901] validating driver "kvm2" against <nil>
	I0603 13:40:05.698841 1136266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:40:05.699741 1136266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:40:05.699834 1136266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:40:05.716529 1136266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:40:05.716595 1136266 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 13:40:05.716925 1136266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:40:05.717015 1136266 cni.go:84] Creating CNI manager for ""
	I0603 13:40:05.717033 1136266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:40:05.717051 1136266 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 13:40:05.717149 1136266 start.go:340] cluster config:
	{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:40:05.717290 1136266 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:40:05.719456 1136266 out.go:177] * Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	I0603 13:40:05.720968 1136266 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:40:05.721021 1136266 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 13:40:05.721047 1136266 cache.go:56] Caching tarball of preloaded images
	I0603 13:40:05.721142 1136266 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:40:05.721152 1136266 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 13:40:05.721242 1136266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:40:05.721260 1136266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json: {Name:mkc1c07bf827ca375ec8402d2cd9b4f091ca3c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:40:05.721395 1136266 start.go:360] acquireMachinesLock for old-k8s-version-151788: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:40:14.574893 1136266 start.go:364] duration metric: took 8.853414626s to acquireMachinesLock for "old-k8s-version-151788"
	I0603 13:40:14.574976 1136266 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:40:14.575096 1136266 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 13:40:14.577469 1136266 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 13:40:14.577662 1136266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:40:14.577745 1136266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:40:14.595004 1136266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0603 13:40:14.595577 1136266 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:40:14.596177 1136266 main.go:141] libmachine: Using API Version  1
	I0603 13:40:14.596207 1136266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:40:14.596652 1136266 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:40:14.596898 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:40:14.597062 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:40:14.597246 1136266 start.go:159] libmachine.API.Create for "old-k8s-version-151788" (driver="kvm2")
	I0603 13:40:14.597282 1136266 client.go:168] LocalClient.Create starting
	I0603 13:40:14.597317 1136266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 13:40:14.597364 1136266 main.go:141] libmachine: Decoding PEM data...
	I0603 13:40:14.597392 1136266 main.go:141] libmachine: Parsing certificate...
	I0603 13:40:14.597499 1136266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 13:40:14.597527 1136266 main.go:141] libmachine: Decoding PEM data...
	I0603 13:40:14.597547 1136266 main.go:141] libmachine: Parsing certificate...
	I0603 13:40:14.597571 1136266 main.go:141] libmachine: Running pre-create checks...
	I0603 13:40:14.597583 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .PreCreateCheck
	I0603 13:40:14.598095 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:40:14.598607 1136266 main.go:141] libmachine: Creating machine...
	I0603 13:40:14.598624 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .Create
	I0603 13:40:14.598777 1136266 main.go:141] libmachine: (old-k8s-version-151788) Creating KVM machine...
	I0603 13:40:14.600396 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found existing default KVM network
	I0603 13:40:14.602271 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:14.602075 1136388 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:11:8f:ea} reservation:<nil>}
	I0603 13:40:14.604123 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:14.604021 1136388 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b4bc0}
	I0603 13:40:14.604188 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | created network xml: 
	I0603 13:40:14.604228 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | <network>
	I0603 13:40:14.604262 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG |   <name>mk-old-k8s-version-151788</name>
	I0603 13:40:14.604284 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG |   <dns enable='no'/>
	I0603 13:40:14.604295 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG |   
	I0603 13:40:14.604308 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0603 13:40:14.604327 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG |     <dhcp>
	I0603 13:40:14.604339 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0603 13:40:14.604368 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG |     </dhcp>
	I0603 13:40:14.604379 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG |   </ip>
	I0603 13:40:14.604389 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG |   
	I0603 13:40:14.604399 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | </network>
	I0603 13:40:14.604409 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | 
	I0603 13:40:14.610460 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | trying to create private KVM network mk-old-k8s-version-151788 192.168.50.0/24...
	I0603 13:40:14.689775 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | private KVM network mk-old-k8s-version-151788 192.168.50.0/24 created
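	As a side note (not output captured from this run): the private network defined in the DBG XML above can be inspected on the host with virsh, using the same qemu:///system URI passed via --kvm-qemu-uri; its dump should match the definition printed above (dns disabled, gateway 192.168.50.1/24, DHCP range 192.168.50.2-192.168.50.253).
	# sketch: inspect the libvirt network minikube just created
	virsh --connect qemu:///system net-list --all
	virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-151788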
	I0603 13:40:14.689825 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:14.689766 1136388 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:40:14.689868 1136266 main.go:141] libmachine: (old-k8s-version-151788) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788 ...
	I0603 13:40:14.689897 1136266 main.go:141] libmachine: (old-k8s-version-151788) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 13:40:14.689963 1136266 main.go:141] libmachine: (old-k8s-version-151788) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 13:40:14.981766 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:14.981668 1136388 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa...
	I0603 13:40:15.107946 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:15.107804 1136388 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/old-k8s-version-151788.rawdisk...
	I0603 13:40:15.107982 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Writing magic tar header
	I0603 13:40:15.108003 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Writing SSH key tar header
	I0603 13:40:15.108016 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:15.107973 1136388 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788 ...
	I0603 13:40:15.108168 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788
	I0603 13:40:15.108196 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 13:40:15.108224 1136266 main.go:141] libmachine: (old-k8s-version-151788) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788 (perms=drwx------)
	I0603 13:40:15.108244 1136266 main.go:141] libmachine: (old-k8s-version-151788) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 13:40:15.108252 1136266 main.go:141] libmachine: (old-k8s-version-151788) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 13:40:15.108261 1136266 main.go:141] libmachine: (old-k8s-version-151788) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 13:40:15.108268 1136266 main.go:141] libmachine: (old-k8s-version-151788) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 13:40:15.108278 1136266 main.go:141] libmachine: (old-k8s-version-151788) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 13:40:15.108283 1136266 main.go:141] libmachine: (old-k8s-version-151788) Creating domain...
	I0603 13:40:15.108290 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:40:15.108298 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 13:40:15.108304 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 13:40:15.108310 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Checking permissions on dir: /home/jenkins
	I0603 13:40:15.108315 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Checking permissions on dir: /home
	I0603 13:40:15.108322 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Skipping /home - not owner
	I0603 13:40:15.109673 1136266 main.go:141] libmachine: (old-k8s-version-151788) define libvirt domain using xml: 
	I0603 13:40:15.109697 1136266 main.go:141] libmachine: (old-k8s-version-151788) <domain type='kvm'>
	I0603 13:40:15.109707 1136266 main.go:141] libmachine: (old-k8s-version-151788)   <name>old-k8s-version-151788</name>
	I0603 13:40:15.109720 1136266 main.go:141] libmachine: (old-k8s-version-151788)   <memory unit='MiB'>2200</memory>
	I0603 13:40:15.109733 1136266 main.go:141] libmachine: (old-k8s-version-151788)   <vcpu>2</vcpu>
	I0603 13:40:15.109740 1136266 main.go:141] libmachine: (old-k8s-version-151788)   <features>
	I0603 13:40:15.109749 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <acpi/>
	I0603 13:40:15.109756 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <apic/>
	I0603 13:40:15.109789 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <pae/>
	I0603 13:40:15.109815 1136266 main.go:141] libmachine: (old-k8s-version-151788)     
	I0603 13:40:15.109838 1136266 main.go:141] libmachine: (old-k8s-version-151788)   </features>
	I0603 13:40:15.109850 1136266 main.go:141] libmachine: (old-k8s-version-151788)   <cpu mode='host-passthrough'>
	I0603 13:40:15.109861 1136266 main.go:141] libmachine: (old-k8s-version-151788)   
	I0603 13:40:15.109867 1136266 main.go:141] libmachine: (old-k8s-version-151788)   </cpu>
	I0603 13:40:15.109876 1136266 main.go:141] libmachine: (old-k8s-version-151788)   <os>
	I0603 13:40:15.109883 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <type>hvm</type>
	I0603 13:40:15.109901 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <boot dev='cdrom'/>
	I0603 13:40:15.109912 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <boot dev='hd'/>
	I0603 13:40:15.109924 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <bootmenu enable='no'/>
	I0603 13:40:15.109934 1136266 main.go:141] libmachine: (old-k8s-version-151788)   </os>
	I0603 13:40:15.109942 1136266 main.go:141] libmachine: (old-k8s-version-151788)   <devices>
	I0603 13:40:15.109954 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <disk type='file' device='cdrom'>
	I0603 13:40:15.109972 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/boot2docker.iso'/>
	I0603 13:40:15.109984 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <target dev='hdc' bus='scsi'/>
	I0603 13:40:15.110005 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <readonly/>
	I0603 13:40:15.110017 1136266 main.go:141] libmachine: (old-k8s-version-151788)     </disk>
	I0603 13:40:15.110026 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <disk type='file' device='disk'>
	I0603 13:40:15.110040 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 13:40:15.110057 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/old-k8s-version-151788.rawdisk'/>
	I0603 13:40:15.110068 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <target dev='hda' bus='virtio'/>
	I0603 13:40:15.110079 1136266 main.go:141] libmachine: (old-k8s-version-151788)     </disk>
	I0603 13:40:15.110086 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <interface type='network'>
	I0603 13:40:15.110096 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <source network='mk-old-k8s-version-151788'/>
	I0603 13:40:15.110106 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <model type='virtio'/>
	I0603 13:40:15.110115 1136266 main.go:141] libmachine: (old-k8s-version-151788)     </interface>
	I0603 13:40:15.110127 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <interface type='network'>
	I0603 13:40:15.110137 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <source network='default'/>
	I0603 13:40:15.110147 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <model type='virtio'/>
	I0603 13:40:15.110156 1136266 main.go:141] libmachine: (old-k8s-version-151788)     </interface>
	I0603 13:40:15.110167 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <serial type='pty'>
	I0603 13:40:15.110176 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <target port='0'/>
	I0603 13:40:15.110186 1136266 main.go:141] libmachine: (old-k8s-version-151788)     </serial>
	I0603 13:40:15.110195 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <console type='pty'>
	I0603 13:40:15.110207 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <target type='serial' port='0'/>
	I0603 13:40:15.110217 1136266 main.go:141] libmachine: (old-k8s-version-151788)     </console>
	I0603 13:40:15.110228 1136266 main.go:141] libmachine: (old-k8s-version-151788)     <rng model='virtio'>
	I0603 13:40:15.110241 1136266 main.go:141] libmachine: (old-k8s-version-151788)       <backend model='random'>/dev/random</backend>
	I0603 13:40:15.110251 1136266 main.go:141] libmachine: (old-k8s-version-151788)     </rng>
	I0603 13:40:15.110259 1136266 main.go:141] libmachine: (old-k8s-version-151788)     
	I0603 13:40:15.110269 1136266 main.go:141] libmachine: (old-k8s-version-151788)     
	I0603 13:40:15.110278 1136266 main.go:141] libmachine: (old-k8s-version-151788)   </devices>
	I0603 13:40:15.110288 1136266 main.go:141] libmachine: (old-k8s-version-151788) </domain>
	I0603 13:40:15.110303 1136266 main.go:141] libmachine: (old-k8s-version-151788) 
	I0603 13:40:15.115616 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:d2:91:7e in network default
	I0603 13:40:15.116375 1136266 main.go:141] libmachine: (old-k8s-version-151788) Ensuring networks are active...
	I0603 13:40:15.116401 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:15.117474 1136266 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network default is active
	I0603 13:40:15.117790 1136266 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network mk-old-k8s-version-151788 is active
	I0603 13:40:15.118520 1136266 main.go:141] libmachine: (old-k8s-version-151788) Getting domain xml...
	I0603 13:40:15.119442 1136266 main.go:141] libmachine: (old-k8s-version-151788) Creating domain...
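	Another side note rather than captured output: once the domain is defined, the DHCP lease the driver polls for below can also be checked directly with virsh (domain and network names taken from the log above).
	# sketch: dump the defined domain and watch for its DHCP lease on the private network
	virsh --connect qemu:///system dumpxml old-k8s-version-151788
	virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-151788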
	I0603 13:40:16.531903 1136266 main.go:141] libmachine: (old-k8s-version-151788) Waiting to get IP...
	I0603 13:40:16.532649 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:16.533202 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:16.533227 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:16.533154 1136388 retry.go:31] will retry after 294.416071ms: waiting for machine to come up
	I0603 13:40:16.830006 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:16.830666 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:16.830696 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:16.830625 1136388 retry.go:31] will retry after 374.015365ms: waiting for machine to come up
	I0603 13:40:17.206716 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:17.207924 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:17.207951 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:17.207856 1136388 retry.go:31] will retry after 366.235311ms: waiting for machine to come up
	I0603 13:40:17.578092 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:17.582539 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:17.582566 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:17.582451 1136388 retry.go:31] will retry after 553.37394ms: waiting for machine to come up
	I0603 13:40:18.137070 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:18.137693 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:18.137728 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:18.137640 1136388 retry.go:31] will retry after 761.980085ms: waiting for machine to come up
	I0603 13:40:18.901923 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:18.902635 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:18.902672 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:18.902570 1136388 retry.go:31] will retry after 740.632153ms: waiting for machine to come up
	I0603 13:40:19.645291 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:19.646036 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:19.646059 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:19.645973 1136388 retry.go:31] will retry after 988.351766ms: waiting for machine to come up
	I0603 13:40:20.636069 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:20.636844 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:20.636861 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:20.636790 1136388 retry.go:31] will retry after 1.38077277s: waiting for machine to come up
	I0603 13:40:22.018825 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:22.019431 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:22.019467 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:22.019348 1136388 retry.go:31] will retry after 1.556582694s: waiting for machine to come up
	I0603 13:40:23.578094 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:23.578765 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:23.578806 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:23.578680 1136388 retry.go:31] will retry after 1.805943796s: waiting for machine to come up
	I0603 13:40:25.386065 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:25.386598 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:25.386627 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:25.386550 1136388 retry.go:31] will retry after 2.57512651s: waiting for machine to come up
	I0603 13:40:27.963668 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:27.964276 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:27.964307 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:27.964219 1136388 retry.go:31] will retry after 2.972366952s: waiting for machine to come up
	I0603 13:40:30.938564 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:30.939095 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:30.939124 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:30.939035 1136388 retry.go:31] will retry after 3.065653217s: waiting for machine to come up
	I0603 13:40:34.007087 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:34.007703 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:40:34.007728 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:40:34.007649 1136388 retry.go:31] will retry after 4.388056129s: waiting for machine to come up
	I0603 13:40:38.398798 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:38.399383 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has current primary IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:38.399426 1136266 main.go:141] libmachine: (old-k8s-version-151788) Found IP for machine: 192.168.50.65
	I0603 13:40:38.399441 1136266 main.go:141] libmachine: (old-k8s-version-151788) Reserving static IP address...
	I0603 13:40:38.400030 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"} in network mk-old-k8s-version-151788
	I0603 13:40:38.488070 1136266 main.go:141] libmachine: (old-k8s-version-151788) Reserved static IP address: 192.168.50.65
	I0603 13:40:38.488098 1136266 main.go:141] libmachine: (old-k8s-version-151788) Waiting for SSH to be available...
	I0603 13:40:38.488118 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Getting to WaitForSSH function...
	I0603 13:40:38.491244 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:38.491607 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788
	I0603 13:40:38.491627 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find defined IP address of network mk-old-k8s-version-151788 interface with MAC address 52:54:00:56:4e:c1
	I0603 13:40:38.491919 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH client type: external
	I0603 13:40:38.491944 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa (-rw-------)
	I0603 13:40:38.491971 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:40:38.491980 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | About to run SSH command:
	I0603 13:40:38.491990 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | exit 0
	I0603 13:40:38.496067 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | SSH cmd err, output: exit status 255: 
	I0603 13:40:38.496087 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0603 13:40:38.496093 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | command : exit 0
	I0603 13:40:38.496098 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | err     : exit status 255
	I0603 13:40:38.496106 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | output  : 
	I0603 13:40:41.497862 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Getting to WaitForSSH function...
	I0603 13:40:41.500650 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:41.501154 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:41.501185 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:41.501310 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH client type: external
	I0603 13:40:41.501337 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa (-rw-------)
	I0603 13:40:41.501358 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:40:41.501377 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | About to run SSH command:
	I0603 13:40:41.501390 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | exit 0
	I0603 13:40:41.634973 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | SSH cmd err, output: <nil>: 
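	For readability only (a reconstruction, not additional log output): the external SSH probe that succeeds here corresponds roughly to the following invocation, assembled from the DBG argument list logged above.
	# reconstructed from the logged ssh arguments; equivalent to the driver's 'exit 0' probe
	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
	    -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
	    -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa \
	    -p 22 docker@192.168.50.65 'exit 0'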
	I0603 13:40:41.635194 1136266 main.go:141] libmachine: (old-k8s-version-151788) KVM machine creation complete!
	I0603 13:40:41.635549 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:40:41.636215 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:40:41.636518 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:40:41.636727 1136266 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 13:40:41.636745 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetState
	I0603 13:40:41.638307 1136266 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 13:40:41.638336 1136266 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 13:40:41.638345 1136266 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 13:40:41.638363 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:41.641163 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:41.641578 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:41.641601 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:41.641810 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:41.642026 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:41.642200 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:41.642441 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:41.642717 1136266 main.go:141] libmachine: Using SSH client type: native
	I0603 13:40:41.642909 1136266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:40:41.642921 1136266 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 13:40:41.755202 1136266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:40:41.755230 1136266 main.go:141] libmachine: Detecting the provisioner...
	I0603 13:40:41.755240 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:41.758712 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:41.759117 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:41.759156 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:41.759391 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:41.759596 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:41.759786 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:41.759920 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:41.760081 1136266 main.go:141] libmachine: Using SSH client type: native
	I0603 13:40:41.760283 1136266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:40:41.760311 1136266 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 13:40:41.867506 1136266 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 13:40:41.867595 1136266 main.go:141] libmachine: found compatible host: buildroot
	I0603 13:40:41.867606 1136266 main.go:141] libmachine: Provisioning with buildroot...
	I0603 13:40:41.867614 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:40:41.867881 1136266 buildroot.go:166] provisioning hostname "old-k8s-version-151788"
	I0603 13:40:41.867916 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:40:41.868170 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:41.871769 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:41.872248 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:41.872274 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:41.872733 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:41.872962 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:41.873187 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:41.873369 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:41.873608 1136266 main.go:141] libmachine: Using SSH client type: native
	I0603 13:40:41.873835 1136266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:40:41.873856 1136266 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-151788 && echo "old-k8s-version-151788" | sudo tee /etc/hostname
	I0603 13:40:42.009952 1136266 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-151788
	
	I0603 13:40:42.009988 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:42.013716 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.014293 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:42.014318 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.014583 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:42.014794 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:42.015019 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:42.015202 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:42.015459 1136266 main.go:141] libmachine: Using SSH client type: native
	I0603 13:40:42.015697 1136266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:40:42.015727 1136266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-151788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-151788/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-151788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:40:42.142695 1136266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:40:42.142730 1136266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:40:42.142752 1136266 buildroot.go:174] setting up certificates
	I0603 13:40:42.142771 1136266 provision.go:84] configureAuth start
	I0603 13:40:42.142788 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:40:42.143129 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:40:42.148413 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.149034 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:42.149065 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.149237 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:42.154658 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.155205 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:42.155220 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.155273 1136266 provision.go:143] copyHostCerts
	I0603 13:40:42.155342 1136266 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:40:42.155360 1136266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:40:42.155439 1136266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:40:42.155569 1136266 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:40:42.155578 1136266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:40:42.155611 1136266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:40:42.155701 1136266 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:40:42.155719 1136266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:40:42.155750 1136266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:40:42.155831 1136266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-151788 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-151788]
	I0603 13:40:42.334654 1136266 provision.go:177] copyRemoteCerts
	I0603 13:40:42.334712 1136266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:40:42.334740 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:42.338444 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.338973 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:42.339002 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.339192 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:42.339464 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:42.339652 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:42.339848 1136266 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:40:42.426563 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:40:42.458445 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 13:40:42.490606 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:40:42.519115 1136266 provision.go:87] duration metric: took 376.324051ms to configureAuth
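For context, the server certificate generated during configureAuth above carries the SAN set [127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-151788]. Below is a minimal Go sketch of producing such a SAN certificate with crypto/x509; it self-signs for brevity (the provisioner above signs with the ca.pem/ca-key.pem pair listed in the auth options), so everything beyond the SAN list and expiry is illustrative, not minikube's code.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-151788"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration seen later in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs: the IPs and DNS names this machine will be reached by.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.65")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-151788"},
	}
	// Self-signed for brevity; a real provisioner passes the CA cert and key as the parent here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}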
	I0603 13:40:42.519144 1136266 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:40:42.519376 1136266 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:40:42.519521 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:42.522609 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.523051 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:42.523086 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.523318 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:42.523520 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:42.523709 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:42.523864 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:42.524067 1136266 main.go:141] libmachine: Using SSH client type: native
	I0603 13:40:42.524306 1136266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:40:42.524332 1136266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:40:42.851069 1136266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
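The runtime option above is delivered as a one-line sysconfig drop-in followed by a crio restart. A rough local equivalent of that SSH command, assuming root and using only the path and flag value visible in the log, could look like this sketch:

package main

import (
	"os"
	"os/exec"
)

func main() {
	const dropIn = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		panic(err)
	}
	// Write the drop-in that the crio unit sources, then restart the service.
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0644); err != nil {
		panic(err)
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		panic(string(out))
	}
}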
	
	I0603 13:40:42.851102 1136266 main.go:141] libmachine: Checking connection to Docker...
	I0603 13:40:42.851113 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetURL
	I0603 13:40:42.853223 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using libvirt version 6000000
	I0603 13:40:42.856403 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.856857 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:42.856873 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.857059 1136266 main.go:141] libmachine: Docker is up and running!
	I0603 13:40:42.857069 1136266 main.go:141] libmachine: Reticulating splines...
	I0603 13:40:42.857087 1136266 client.go:171] duration metric: took 28.259785817s to LocalClient.Create
	I0603 13:40:42.857107 1136266 start.go:167] duration metric: took 28.259873955s to libmachine.API.Create "old-k8s-version-151788"
	I0603 13:40:42.857115 1136266 start.go:293] postStartSetup for "old-k8s-version-151788" (driver="kvm2")
	I0603 13:40:42.857123 1136266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:40:42.857143 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:40:42.857375 1136266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:40:42.857396 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:42.860084 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.860472 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:42.860495 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:42.860660 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:42.860809 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:42.860941 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:42.861026 1136266 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:40:42.956352 1136266 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:40:42.961085 1136266 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:40:42.961108 1136266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:40:42.961157 1136266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:40:42.961255 1136266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:40:42.961383 1136266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:40:42.972615 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:40:43.010662 1136266 start.go:296] duration metric: took 153.536715ms for postStartSetup
	I0603 13:40:43.010720 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:40:43.011283 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:40:43.014338 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.014736 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:43.014781 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.015047 1136266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:40:43.015280 1136266 start.go:128] duration metric: took 28.440169118s to createHost
	I0603 13:40:43.015315 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:43.018459 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.018765 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:43.018799 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.018916 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:43.019114 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:43.019354 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:43.019549 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:43.019742 1136266 main.go:141] libmachine: Using SSH client type: native
	I0603 13:40:43.019928 1136266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:40:43.019935 1136266 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:40:43.136080 1136266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422043.122995135
	
	I0603 13:40:43.136109 1136266 fix.go:216] guest clock: 1717422043.122995135
	I0603 13:40:43.136118 1136266 fix.go:229] Guest: 2024-06-03 13:40:43.122995135 +0000 UTC Remote: 2024-06-03 13:40:43.015299729 +0000 UTC m=+37.415322568 (delta=107.695406ms)
	I0603 13:40:43.136160 1136266 fix.go:200] guest clock delta is within tolerance: 107.695406ms
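The guest-clock check above runs date +%s.%N on the VM and compares the result with a reference timestamp. A small sketch of that comparison, using the two timestamps from the lines above; the one-second tolerance is an assumption for illustration, the log only shows that a ~108ms delta was accepted:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses `date +%s.%N` output from the VM and returns how far the
// guest clock is ahead of (positive) or behind (negative) the reference time.
func guestDelta(dateOutput string, reference time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(reference), nil
}

func main() {
	// Both values taken from the log lines above.
	reference := time.Date(2024, time.June, 3, 13, 40, 43, 15299729, time.UTC)
	delta, err := guestDelta("1717422043.122995135\n", reference)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance, not taken from the log
	if delta > -tolerance && delta < tolerance {
		fmt.Println("guest clock delta is within tolerance:", delta)
	} else {
		fmt.Println("guest clock needs adjustment, delta:", delta)
	}
}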
	I0603 13:40:43.136167 1136266 start.go:83] releasing machines lock for "old-k8s-version-151788", held for 28.561229359s
	I0603 13:40:43.136192 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:40:43.136488 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:40:43.140117 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.140509 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:43.140536 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.140727 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:40:43.141286 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:40:43.141534 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:40:43.141632 1136266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:40:43.141683 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:43.141924 1136266 ssh_runner.go:195] Run: cat /version.json
	I0603 13:40:43.141945 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:40:43.145834 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.146306 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:43.146330 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.146418 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:43.146593 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:43.146716 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:43.146815 1136266 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:40:43.147163 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.153528 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:40:43.153603 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:43.153631 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:43.153772 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:40:43.153971 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:40:43.154115 1136266 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:40:43.241236 1136266 ssh_runner.go:195] Run: systemctl --version
	I0603 13:40:43.267771 1136266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:40:43.441383 1136266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:40:43.448465 1136266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:40:43.448554 1136266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:40:43.474999 1136266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:40:43.475028 1136266 start.go:494] detecting cgroup driver to use...
	I0603 13:40:43.475082 1136266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:40:43.495957 1136266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:40:43.515040 1136266 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:40:43.515104 1136266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:40:43.535022 1136266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:40:43.554336 1136266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:40:43.715606 1136266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:40:43.930389 1136266 docker.go:233] disabling docker service ...
	I0603 13:40:43.930476 1136266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:40:43.955605 1136266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:40:43.970613 1136266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:40:44.121372 1136266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:40:44.291154 1136266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:40:44.314650 1136266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:40:44.341118 1136266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 13:40:44.341181 1136266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:40:44.355167 1136266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:40:44.355225 1136266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:40:44.369092 1136266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:40:44.383371 1136266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
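The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager (the conmon_cgroup delete/insert follows the same pattern). A hedged pure-Go version of the first two line rewrites, not minikube's actual implementation:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Same effect as the sed expressions in the log: replace whole matching lines.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0644); err != nil {
		panic(err)
	}
}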
	I0603 13:40:44.397873 1136266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:40:44.411166 1136266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:40:44.424159 1136266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:40:44.424217 1136266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:40:44.438120 1136266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
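The netfilter probe is tolerant: when the bridge sysctl is missing, br_netfilter gets loaded and IPv4 forwarding is then enabled directly through /proc. A compact sketch of that fallback, assuming it runs as root on the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge sysctl key is not there yet, the br_netfilter module is missing.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables not present, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
}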
	I0603 13:40:44.448972 1136266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:40:44.597085 1136266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:40:44.793895 1136266 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:40:44.793976 1136266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:40:44.803325 1136266 start.go:562] Will wait 60s for crictl version
	I0603 13:40:44.803372 1136266 ssh_runner.go:195] Run: which crictl
	I0603 13:40:44.809573 1136266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:40:44.860879 1136266 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
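Both 60s waits above boil down to polling a path until it exists or a deadline passes. An illustrative version of such a poll for the CRI socket (the 500ms interval is an arbitrary choice, not taken from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is ready")
}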
	I0603 13:40:44.860969 1136266 ssh_runner.go:195] Run: crio --version
	I0603 13:40:44.899271 1136266 ssh_runner.go:195] Run: crio --version
	I0603 13:40:44.937696 1136266 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 13:40:44.938872 1136266 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:40:44.942241 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:44.942614 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:40:31 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:40:44.942653 1136266 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:40:44.942879 1136266 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:40:44.948965 1136266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
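The bash pipeline above first drops any stale host.minikube.internal line and then appends a fresh one, so repeated starts never accumulate entries. The same idea expressed in plain Go, operating directly on /etc/hosts rather than through ssh_runner:

package main

import (
	"os"
	"strings"
)

// setHostsEntry rewrites the hosts file so exactly one line maps name to ip.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for this name (mirrors the grep -v in the log).
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}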
	I0603 13:40:44.965845 1136266 kubeadm.go:877] updating cluster {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:40:44.965992 1136266 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:40:44.966055 1136266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:40:45.009241 1136266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:40:45.009453 1136266 ssh_runner.go:195] Run: which lz4
	I0603 13:40:45.015238 1136266 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:40:45.021221 1136266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:40:45.021254 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 13:40:47.345415 1136266 crio.go:462] duration metric: took 2.330199977s to copy over tarball
	I0603 13:40:47.345606 1136266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:40:50.617534 1136266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.271873316s)
	I0603 13:40:50.617584 1136266 crio.go:469] duration metric: took 3.272121434s to extract the tarball
	I0603 13:40:50.617595 1136266 ssh_runner.go:146] rm: /preloaded.tar.lz4
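After the ~473MB preload tarball is copied over, it is unpacked into /var with security xattrs preserved and lz4 decompression, then deleted. A sketch of driving those two steps with os/exec, assuming root and a tar with lz4 support on the guest as the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as in the log: preserve security xattrs, decompress with lz4, unpack under /var.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(string(out))
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
	// Free the space once the images sit under /var/lib/containers.
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		panic(err)
	}
}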
	I0603 13:40:50.663999 1136266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:40:50.769141 1136266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:40:50.769176 1136266 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:40:50.769253 1136266 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:40:50.769257 1136266 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:40:50.769279 1136266 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 13:40:50.769295 1136266 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 13:40:50.769328 1136266 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:40:50.769326 1136266 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:40:50.769337 1136266 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:40:50.769381 1136266 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:40:50.771015 1136266 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:40:50.771031 1136266 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:40:50.771035 1136266 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:40:50.771050 1136266 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:40:50.771019 1136266 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 13:40:50.771019 1136266 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:40:50.771174 1136266 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 13:40:50.771238 1136266 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:40:50.940558 1136266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:40:50.963427 1136266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 13:40:50.964630 1136266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:40:50.973065 1136266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:40:50.980542 1136266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 13:40:50.984322 1136266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:40:51.001861 1136266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:40:51.007413 1136266 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 13:40:51.007492 1136266 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:40:51.007544 1136266 ssh_runner.go:195] Run: which crictl
	I0603 13:40:51.022054 1136266 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 13:40:51.148559 1136266 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 13:40:51.148610 1136266 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:40:51.148634 1136266 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 13:40:51.148658 1136266 ssh_runner.go:195] Run: which crictl
	I0603 13:40:51.148660 1136266 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:40:51.148688 1136266 ssh_runner.go:195] Run: which crictl
	I0603 13:40:51.238597 1136266 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 13:40:51.238909 1136266 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 13:40:51.238972 1136266 ssh_runner.go:195] Run: which crictl
	I0603 13:40:51.239103 1136266 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 13:40:51.239130 1136266 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:40:51.239158 1136266 ssh_runner.go:195] Run: which crictl
	I0603 13:40:51.239243 1136266 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 13:40:51.239266 1136266 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:40:51.239292 1136266 ssh_runner.go:195] Run: which crictl
	I0603 13:40:51.239363 1136266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:40:51.239439 1136266 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 13:40:51.239474 1136266 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 13:40:51.239500 1136266 ssh_runner.go:195] Run: which crictl
	I0603 13:40:51.239573 1136266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 13:40:51.239638 1136266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:40:51.338811 1136266 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 13:40:51.338865 1136266 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 13:40:51.338957 1136266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 13:40:51.339007 1136266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 13:40:51.339017 1136266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:40:51.339181 1136266 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:40:51.339249 1136266 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 13:40:51.443841 1136266 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 13:40:51.443910 1136266 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 13:40:51.443995 1136266 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 13:40:51.444016 1136266 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 13:40:51.444043 1136266 cache_images.go:92] duration metric: took 674.85194ms to LoadCachedImages
	W0603 13:40:51.444118 1136266 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
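Each "needs transfer" line above comes from comparing the image ID podman reports with the ID minikube expects; on a mismatch the stale copy is removed with crictl so the cached tarball can be loaded instead (the cache files are missing here, hence the warning). A simplified sketch of that check, with the image name and expected ID taken from the kube-proxy line above; the helper itself is illustrative, assumes root, and is not the cache_images implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image is absent or present under a different ID.
func needsTransfer(image, wantID string) (bool, error) {
	out, err := exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true, nil // not present in the container runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID, nil
}

func main() {
	img := "registry.k8s.io/kube-proxy:v1.20.0"
	want := "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	stale, err := needsTransfer(img, want)
	if err != nil {
		panic(err)
	}
	if stale {
		fmt.Println(img, "needs transfer, removing the runtime copy")
		// Ignore the error if the tag was never known to the runtime.
		exec.Command("crictl", "rmi", img).Run()
	}
}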
	I0603 13:40:51.444131 1136266 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0603 13:40:51.444269 1136266 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-151788 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
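The kubelet drop-in above is rendered from a handful of node settings (binary path by Kubernetes version, hostname override, node IP, CRI endpoint). A hedged sketch of producing that 10-kubeadm.conf text with text/template, with the field values copied from the log and the template itself simplified:

package main

import (
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.20.0", "old-k8s-version-151788", "192.168.50.65"}
	// Prints the rendered drop-in; the provisioner copies it to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf instead.
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}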
	I0603 13:40:51.444344 1136266 ssh_runner.go:195] Run: crio config
	I0603 13:40:51.513067 1136266 cni.go:84] Creating CNI manager for ""
	I0603 13:40:51.513094 1136266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:40:51.513107 1136266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:40:51.513127 1136266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-151788 NodeName:old-k8s-version-151788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 13:40:51.513271 1136266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-151788"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:40:51.513329 1136266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 13:40:51.525014 1136266 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:40:51.525091 1136266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:40:51.536145 1136266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0603 13:40:51.554038 1136266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:40:51.573474 1136266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0603 13:40:51.596190 1136266 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0603 13:40:51.602027 1136266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:40:51.615089 1136266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:40:51.756470 1136266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:40:51.778801 1136266 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788 for IP: 192.168.50.65
	I0603 13:40:51.778825 1136266 certs.go:194] generating shared ca certs ...
	I0603 13:40:51.778846 1136266 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:40:51.779019 1136266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:40:51.779072 1136266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
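"skipping valid ... ca cert" means the CA material already on disk is reused because it is still within its validity window. A minimal version of that kind of check; the path used in main is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certStillValid reports whether the first certificate in the PEM file
// is currently within its validity window.
func certStillValid(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	now := time.Now()
	return now.After(cert.NotBefore) && now.Before(cert.NotAfter), nil
}

func main() {
	ok, err := certStillValid(os.Getenv("HOME") + "/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	fmt.Println("reuse existing CA:", ok)
}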
	I0603 13:40:51.779082 1136266 certs.go:256] generating profile certs ...
	I0603 13:40:51.779154 1136266 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key
	I0603 13:40:51.779180 1136266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.crt with IP's: []
	I0603 13:40:51.989871 1136266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.crt ...
	I0603 13:40:51.989972 1136266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.crt: {Name:mka0f2e1a916b15fe3ccb18f3ba6c6e86f03e8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:40:51.990179 1136266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key ...
	I0603 13:40:51.990201 1136266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key: {Name:mk32ef6a7e8deaff614e0f73382e7284354b83be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:40:51.990357 1136266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3
	I0603 13:40:51.990386 1136266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt.9bfe4cc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.65]
	I0603 13:40:52.141196 1136266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt.9bfe4cc3 ...
	I0603 13:40:52.141228 1136266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt.9bfe4cc3: {Name:mk22accb0317bd54f785855274a07060e595c6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:40:52.151573 1136266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3 ...
	I0603 13:40:52.151613 1136266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3: {Name:mk4c9087cd22337fdc13dff8c3b4abb2dfb97b8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:40:52.151768 1136266 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt.9bfe4cc3 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt
	I0603 13:40:52.151861 1136266 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key
	I0603 13:40:52.151933 1136266 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key
	I0603 13:40:52.151956 1136266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt with IP's: []
	I0603 13:40:52.409096 1136266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt ...
	I0603 13:40:52.409134 1136266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt: {Name:mkcddbd32202e8c262b539accd1e8ac137b95aab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:40:52.409317 1136266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key ...
	I0603 13:40:52.409336 1136266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key: {Name:mk5a7f4487a49c16c160c28d9c2d1b4e77c4af4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:40:52.409572 1136266 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:40:52.409627 1136266 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:40:52.409639 1136266 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:40:52.409672 1136266 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:40:52.409704 1136266 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:40:52.409735 1136266 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:40:52.409792 1136266 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:40:52.410446 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:40:52.442650 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:40:52.482353 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:40:52.520067 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:40:52.561582 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 13:40:52.607571 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:40:52.650119 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:40:52.685380 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:40:52.722136 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:40:52.752982 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:40:52.793028 1136266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:40:52.833978 1136266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:40:52.876008 1136266 ssh_runner.go:195] Run: openssl version
	I0603 13:40:52.886873 1136266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:40:52.924787 1136266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:40:52.931410 1136266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:40:52.931481 1136266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:40:52.938409 1136266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:40:52.963085 1136266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:40:52.977535 1136266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:40:52.982820 1136266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:40:52.982879 1136266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:40:52.991731 1136266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:40:53.018045 1136266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:40:53.032689 1136266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:40:53.038595 1136266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:40:53.038663 1136266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:40:53.045063 1136266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
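	(The lines above install each CA into the node's trust store by symlinking it under its OpenSSL subject hash. A minimal sketch of that pattern for a single certificate, using the path and hash value shown in the log above and the same 'openssl x509 -hash' / 'ln -fs' calls; illustrative only:)

	# Hedged sketch: link one CA certificate into /etc/ssl/certs under its OpenSSL subject hash.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # b5213941 for this CA, per the log above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"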
	I0603 13:40:53.058149 1136266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:40:53.063904 1136266 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 13:40:53.063957 1136266 kubeadm.go:391] StartCluster: {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:40:53.064026 1136266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:40:53.064070 1136266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:40:53.109957 1136266 cri.go:89] found id: ""
	I0603 13:40:53.110029 1136266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 13:40:53.121300 1136266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:40:53.131836 1136266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:40:53.142085 1136266 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:40:53.142108 1136266 kubeadm.go:156] found existing configuration files:
	
	I0603 13:40:53.142150 1136266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:40:53.152469 1136266 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:40:53.152534 1136266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:40:53.162851 1136266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:40:53.172833 1136266 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:40:53.172887 1136266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:40:53.182724 1136266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:40:53.192695 1136266 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:40:53.192756 1136266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:40:53.207332 1136266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:40:53.217336 1136266 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:40:53.217401 1136266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:40:53.230485 1136266 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:40:53.358562 1136266 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:40:53.358640 1136266 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:40:53.531356 1136266 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:40:53.531523 1136266 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:40:53.531660 1136266 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:40:53.777284 1136266 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:40:53.780496 1136266 out.go:204]   - Generating certificates and keys ...
	I0603 13:40:53.780620 1136266 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:40:53.780713 1136266 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:40:53.976281 1136266 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 13:40:54.211410 1136266 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 13:40:54.318760 1136266 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 13:40:54.562832 1136266 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 13:40:54.657953 1136266 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 13:40:54.658196 1136266 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-151788] and IPs [192.168.50.65 127.0.0.1 ::1]
	I0603 13:40:54.747092 1136266 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 13:40:54.747342 1136266 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-151788] and IPs [192.168.50.65 127.0.0.1 ::1]
	I0603 13:40:54.864446 1136266 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 13:40:55.064739 1136266 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 13:40:55.231890 1136266 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 13:40:55.232232 1136266 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:40:55.530283 1136266 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:40:55.824759 1136266 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:40:56.038771 1136266 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:40:56.258826 1136266 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:40:56.282248 1136266 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:40:56.283794 1136266 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:40:56.283876 1136266 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:40:56.449828 1136266 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:40:56.451290 1136266 out.go:204]   - Booting up control plane ...
	I0603 13:40:56.451433 1136266 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:40:56.462567 1136266 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:40:56.463731 1136266 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:40:56.464642 1136266 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:40:56.468990 1136266 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:41:36.468361 1136266 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:41:36.468640 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:41:36.468934 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:41:41.469442 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:41:41.472298 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:41:51.471399 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:41:51.472239 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:42:11.473559 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:42:11.473750 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:42:51.473475 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:42:51.473766 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:42:51.473790 1136266 kubeadm.go:309] 
	I0603 13:42:51.473841 1136266 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:42:51.473917 1136266 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:42:51.473938 1136266 kubeadm.go:309] 
	I0603 13:42:51.473980 1136266 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:42:51.474031 1136266 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:42:51.474168 1136266 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:42:51.474179 1136266 kubeadm.go:309] 
	I0603 13:42:51.474352 1136266 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:42:51.474407 1136266 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:42:51.474454 1136266 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:42:51.474463 1136266 kubeadm.go:309] 
	I0603 13:42:51.474602 1136266 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:42:51.474727 1136266 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:42:51.474737 1136266 kubeadm.go:309] 
	I0603 13:42:51.474875 1136266 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:42:51.474999 1136266 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:42:51.475105 1136266 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:42:51.475204 1136266 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:42:51.475217 1136266 kubeadm.go:309] 
	I0603 13:42:51.475413 1136266 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:42:51.475558 1136266 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:42:51.475671 1136266 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 13:42:51.475816 1136266 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-151788] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-151788] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-151788] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-151788] and IPs [192.168.50.65 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
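	(The kubeadm output above suggests a troubleshooting sequence for the kubelet-check failure. A minimal shell sketch stringing those suggested commands together, assuming a systemd host with CRI-O on /var/run/crio/crio.sock as quoted in the log; not a verified diagnosis of this run:)

	#!/usr/bin/env bash
	# Hedged sketch of the troubleshooting steps suggested by the kubeadm output above.
	set -uo pipefail

	# Is the kubelet service running, and what did it log?
	sudo systemctl status kubelet --no-pager || true
	sudo journalctl -xeu kubelet --no-pager | tail -n 100

	# Did CRI-O start any control-plane containers at all?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause || true

	# Inspect a failing container's logs (substitute a real CONTAINERID from the listing above):
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID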
	
	I0603 13:42:51.475871 1136266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:42:52.543859 1136266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.067955171s)
	I0603 13:42:52.543959 1136266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:42:52.559213 1136266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:42:52.569294 1136266 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:42:52.569317 1136266 kubeadm.go:156] found existing configuration files:
	
	I0603 13:42:52.569369 1136266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:42:52.579202 1136266 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:42:52.579267 1136266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:42:52.588822 1136266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:42:52.598381 1136266 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:42:52.598444 1136266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:42:52.608115 1136266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:42:52.617439 1136266 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:42:52.617498 1136266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:42:52.627141 1136266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:42:52.636434 1136266 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:42:52.636497 1136266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:42:52.646152 1136266 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:42:52.721613 1136266 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:42:52.721734 1136266 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:42:52.872635 1136266 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:42:52.872816 1136266 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:42:52.872948 1136266 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:42:53.067781 1136266 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:42:53.069709 1136266 out.go:204]   - Generating certificates and keys ...
	I0603 13:42:53.069824 1136266 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:42:53.069919 1136266 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:42:53.070032 1136266 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:42:53.070118 1136266 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:42:53.070227 1136266 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:42:53.070304 1136266 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:42:53.070393 1136266 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:42:53.070481 1136266 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:42:53.070640 1136266 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:42:53.070726 1136266 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:42:53.070766 1136266 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:42:53.070898 1136266 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:42:53.172465 1136266 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:42:53.297707 1136266 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:42:53.502245 1136266 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:42:53.735761 1136266 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:42:53.751657 1136266 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:42:53.754465 1136266 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:42:53.754693 1136266 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:42:53.887103 1136266 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:42:53.888899 1136266 out.go:204]   - Booting up control plane ...
	I0603 13:42:53.889026 1136266 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:42:53.891599 1136266 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:42:53.892959 1136266 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:42:53.894104 1136266 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:42:53.898347 1136266 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:43:33.900738 1136266 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:43:33.900821 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:43:33.901097 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:43:38.901110 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:43:38.901360 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:43:48.901962 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:43:48.902220 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:44:08.903346 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:44:08.903618 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:44:48.903069 1136266 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:44:48.903330 1136266 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:44:48.903357 1136266 kubeadm.go:309] 
	I0603 13:44:48.903394 1136266 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:44:48.903426 1136266 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:44:48.903432 1136266 kubeadm.go:309] 
	I0603 13:44:48.903503 1136266 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:44:48.903571 1136266 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:44:48.903726 1136266 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:44:48.903737 1136266 kubeadm.go:309] 
	I0603 13:44:48.903873 1136266 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:44:48.903927 1136266 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:44:48.903967 1136266 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:44:48.903977 1136266 kubeadm.go:309] 
	I0603 13:44:48.904104 1136266 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:44:48.904175 1136266 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:44:48.904179 1136266 kubeadm.go:309] 
	I0603 13:44:48.904285 1136266 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:44:48.904388 1136266 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:44:48.904497 1136266 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:44:48.904602 1136266 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:44:48.904614 1136266 kubeadm.go:309] 
	I0603 13:44:48.905496 1136266 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:44:48.905667 1136266 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:44:48.905770 1136266 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 13:44:48.905863 1136266 kubeadm.go:393] duration metric: took 3m55.841910919s to StartCluster
	I0603 13:44:48.905946 1136266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:44:48.906016 1136266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:44:48.955444 1136266 cri.go:89] found id: ""
	I0603 13:44:48.955475 1136266 logs.go:276] 0 containers: []
	W0603 13:44:48.955487 1136266 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:44:48.955496 1136266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:44:48.955564 1136266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:44:48.998120 1136266 cri.go:89] found id: ""
	I0603 13:44:48.998153 1136266 logs.go:276] 0 containers: []
	W0603 13:44:48.998162 1136266 logs.go:278] No container was found matching "etcd"
	I0603 13:44:48.998168 1136266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:44:48.998230 1136266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:44:49.034437 1136266 cri.go:89] found id: ""
	I0603 13:44:49.034473 1136266 logs.go:276] 0 containers: []
	W0603 13:44:49.034482 1136266 logs.go:278] No container was found matching "coredns"
	I0603 13:44:49.034491 1136266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:44:49.034558 1136266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:44:49.069020 1136266 cri.go:89] found id: ""
	I0603 13:44:49.069049 1136266 logs.go:276] 0 containers: []
	W0603 13:44:49.069058 1136266 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:44:49.069065 1136266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:44:49.069129 1136266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:44:49.102910 1136266 cri.go:89] found id: ""
	I0603 13:44:49.102939 1136266 logs.go:276] 0 containers: []
	W0603 13:44:49.102948 1136266 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:44:49.102954 1136266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:44:49.103013 1136266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:44:49.150530 1136266 cri.go:89] found id: ""
	I0603 13:44:49.150563 1136266 logs.go:276] 0 containers: []
	W0603 13:44:49.150572 1136266 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:44:49.150578 1136266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:44:49.150646 1136266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:44:49.192479 1136266 cri.go:89] found id: ""
	I0603 13:44:49.192520 1136266 logs.go:276] 0 containers: []
	W0603 13:44:49.192528 1136266 logs.go:278] No container was found matching "kindnet"
	I0603 13:44:49.192538 1136266 logs.go:123] Gathering logs for container status ...
	I0603 13:44:49.192553 1136266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:44:49.257060 1136266 logs.go:123] Gathering logs for kubelet ...
	I0603 13:44:49.257092 1136266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:44:49.304728 1136266 logs.go:123] Gathering logs for dmesg ...
	I0603 13:44:49.304768 1136266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:44:49.319492 1136266 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:44:49.319528 1136266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:44:49.451752 1136266 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:44:49.451778 1136266 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:44:49.451794 1136266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0603 13:44:49.546259 1136266 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 13:44:49.546313 1136266 out.go:239] * 
	* 
	W0603 13:44:49.546392 1136266 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:44:49.546418 1136266 out.go:239] * 
	W0603 13:44:49.547513 1136266 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:44:49.551493 1136266 out.go:177] 
	W0603 13:44:49.552927 1136266 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:44:49.552972 1136266 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 13:44:49.552997 1136266 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 13:44:49.554476 1136266 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-151788 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 6 (239.194929ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:44:49.844460 1142745 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-151788" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (284.27s)
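Note on the failure above: kubeadm's wait-control-plane phase gave up because the kubelet on the v1.20.0/cri-o node never answered http://localhost:10248/healthz. The lines below are only a manual triage sketch assembled from the suggestions already printed in the log (profile name and flags are copied from the failing invocation; the cgroup-driver override is the log's own suggestion, not a verified fix):

	# inspect the kubelet inside the node
	out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo journalctl -xeu kubelet"
	# list control-plane containers via crictl on the cri-o socket
	out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the kubelet cgroup driver pinned to systemd, as suggested above
	out/minikube-linux-amd64 start -p old-k8s-version-151788 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd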

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-817450 --alsologtostderr -v=3
E0603 13:42:27.933659 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:27.938944 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:27.949313 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:27.969707 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:28.014240 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:28.094654 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:28.255242 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:28.575703 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:29.216346 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:30.497510 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:33.058686 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:38.179674 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:42:45.542119 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 13:42:48.420390 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-817450 --alsologtostderr -v=3: exit status 82 (2m0.604223162s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-817450"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 13:42:18.894329 1141804 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:42:18.894617 1141804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:42:18.894661 1141804 out.go:304] Setting ErrFile to fd 2...
	I0603 13:42:18.894683 1141804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:42:18.895034 1141804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:42:18.895481 1141804 out.go:298] Setting JSON to false
	I0603 13:42:18.895663 1141804 mustload.go:65] Loading cluster: no-preload-817450
	I0603 13:42:18.896167 1141804 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:42:18.896317 1141804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/config.json ...
	I0603 13:42:18.896612 1141804 mustload.go:65] Loading cluster: no-preload-817450
	I0603 13:42:18.896818 1141804 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:42:18.896894 1141804 stop.go:39] StopHost: no-preload-817450
	I0603 13:42:18.897565 1141804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:42:18.897683 1141804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:42:18.913660 1141804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0603 13:42:18.914204 1141804 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:42:18.915074 1141804 main.go:141] libmachine: Using API Version  1
	I0603 13:42:18.915105 1141804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:42:18.915613 1141804 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:42:18.918690 1141804 out.go:177] * Stopping node "no-preload-817450"  ...
	I0603 13:42:18.920634 1141804 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 13:42:18.920703 1141804 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:42:18.921045 1141804 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 13:42:18.921096 1141804 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:42:18.925226 1141804 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:42:18.925916 1141804 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:42:18.925957 1141804 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:42:18.926127 1141804 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:42:18.926461 1141804 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:42:18.926653 1141804 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:42:18.926798 1141804 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:42:19.046066 1141804 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 13:42:19.120069 1141804 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 13:42:19.202765 1141804 main.go:141] libmachine: Stopping "no-preload-817450"...
	I0603 13:42:19.202802 1141804 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:42:19.204883 1141804 main.go:141] libmachine: (no-preload-817450) Calling .Stop
	I0603 13:42:19.209132 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 0/120
	I0603 13:42:20.211043 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 1/120
	I0603 13:42:21.213013 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 2/120
	I0603 13:42:22.215610 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 3/120
	I0603 13:42:23.217315 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 4/120
	I0603 13:42:24.219428 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 5/120
	I0603 13:42:25.220917 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 6/120
	I0603 13:42:26.222223 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 7/120
	I0603 13:42:27.224790 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 8/120
	I0603 13:42:28.226595 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 9/120
	I0603 13:42:29.228112 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 10/120
	I0603 13:42:30.229818 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 11/120
	I0603 13:42:31.231621 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 12/120
	I0603 13:42:32.233269 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 13/120
	I0603 13:42:33.234922 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 14/120
	I0603 13:42:34.237388 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 15/120
	I0603 13:42:35.239526 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 16/120
	I0603 13:42:36.241226 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 17/120
	I0603 13:42:37.242819 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 18/120
	I0603 13:42:38.244290 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 19/120
	I0603 13:42:39.246649 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 20/120
	I0603 13:42:40.248167 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 21/120
	I0603 13:42:41.249766 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 22/120
	I0603 13:42:42.251985 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 23/120
	I0603 13:42:43.253720 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 24/120
	I0603 13:42:44.255888 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 25/120
	I0603 13:42:45.257529 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 26/120
	I0603 13:42:46.258895 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 27/120
	I0603 13:42:47.260547 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 28/120
	I0603 13:42:48.261975 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 29/120
	I0603 13:42:49.263933 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 30/120
	I0603 13:42:50.266169 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 31/120
	I0603 13:42:51.267541 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 32/120
	I0603 13:42:52.268903 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 33/120
	I0603 13:42:53.270462 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 34/120
	I0603 13:42:54.272229 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 35/120
	I0603 13:42:55.274528 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 36/120
	I0603 13:42:56.276639 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 37/120
	I0603 13:42:57.277973 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 38/120
	I0603 13:42:58.280219 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 39/120
	I0603 13:42:59.282641 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 40/120
	I0603 13:43:00.284464 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 41/120
	I0603 13:43:01.285710 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 42/120
	I0603 13:43:02.287177 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 43/120
	I0603 13:43:03.288528 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 44/120
	I0603 13:43:04.290965 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 45/120
	I0603 13:43:05.292872 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 46/120
	I0603 13:43:06.294616 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 47/120
	I0603 13:43:07.296096 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 48/120
	I0603 13:43:08.297582 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 49/120
	I0603 13:43:09.300096 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 50/120
	I0603 13:43:10.302863 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 51/120
	I0603 13:43:11.304470 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 52/120
	I0603 13:43:12.306027 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 53/120
	I0603 13:43:13.307521 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 54/120
	I0603 13:43:14.310090 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 55/120
	I0603 13:43:15.312574 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 56/120
	I0603 13:43:16.314977 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 57/120
	I0603 13:43:17.316404 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 58/120
	I0603 13:43:18.318025 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 59/120
	I0603 13:43:19.319435 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 60/120
	I0603 13:43:20.321195 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 61/120
	I0603 13:43:21.322558 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 62/120
	I0603 13:43:22.323977 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 63/120
	I0603 13:43:23.325312 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 64/120
	I0603 13:43:24.327551 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 65/120
	I0603 13:43:25.329109 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 66/120
	I0603 13:43:26.330952 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 67/120
	I0603 13:43:27.332633 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 68/120
	I0603 13:43:28.334394 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 69/120
	I0603 13:43:29.336008 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 70/120
	I0603 13:43:30.337464 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 71/120
	I0603 13:43:31.338691 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 72/120
	I0603 13:43:32.340235 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 73/120
	I0603 13:43:33.341810 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 74/120
	I0603 13:43:34.344121 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 75/120
	I0603 13:43:35.345600 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 76/120
	I0603 13:43:36.347092 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 77/120
	I0603 13:43:37.349259 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 78/120
	I0603 13:43:38.351087 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 79/120
	I0603 13:43:39.353599 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 80/120
	I0603 13:43:40.355064 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 81/120
	I0603 13:43:41.356621 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 82/120
	I0603 13:43:42.358037 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 83/120
	I0603 13:43:43.359689 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 84/120
	I0603 13:43:44.361855 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 85/120
	I0603 13:43:45.364039 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 86/120
	I0603 13:43:46.365575 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 87/120
	I0603 13:43:47.366954 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 88/120
	I0603 13:43:48.368354 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 89/120
	I0603 13:43:49.370738 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 90/120
	I0603 13:43:50.372216 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 91/120
	I0603 13:43:51.373604 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 92/120
	I0603 13:43:52.375076 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 93/120
	I0603 13:43:53.376461 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 94/120
	I0603 13:43:54.378602 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 95/120
	I0603 13:43:55.380183 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 96/120
	I0603 13:43:56.381626 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 97/120
	I0603 13:43:57.383276 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 98/120
	I0603 13:43:58.384638 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 99/120
	I0603 13:43:59.387107 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 100/120
	I0603 13:44:00.388604 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 101/120
	I0603 13:44:01.389991 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 102/120
	I0603 13:44:02.391440 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 103/120
	I0603 13:44:03.393048 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 104/120
	I0603 13:44:04.395553 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 105/120
	I0603 13:44:05.397244 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 106/120
	I0603 13:44:06.398807 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 107/120
	I0603 13:44:07.400260 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 108/120
	I0603 13:44:08.402138 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 109/120
	I0603 13:44:09.404192 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 110/120
	I0603 13:44:10.405460 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 111/120
	I0603 13:44:11.407366 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 112/120
	I0603 13:44:12.409627 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 113/120
	I0603 13:44:13.411097 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 114/120
	I0603 13:44:14.413099 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 115/120
	I0603 13:44:15.414509 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 116/120
	I0603 13:44:16.415998 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 117/120
	I0603 13:44:17.417310 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 118/120
	I0603 13:44:18.418797 1141804 main.go:141] libmachine: (no-preload-817450) Waiting for machine to stop 119/120
	I0603 13:44:19.419629 1141804 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 13:44:19.419687 1141804 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 13:44:19.421713 1141804 out.go:177] 
	W0603 13:44:19.423223 1141804 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 13:44:19.423245 1141804 out.go:239] * 
	W0603 13:44:19.427903 1141804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:44:19.429463 1141804 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-817450 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-817450 -n no-preload-817450
E0603 13:44:23.472870 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:30.123423 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:30.128734 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:30.139066 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:30.159424 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:30.199813 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:30.280888 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:30.441431 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:30.762222 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:31.402638 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:32.683189 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:33.713991 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:35.243910 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-817450 -n no-preload-817450: exit status 3 (18.691162487s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:44:38.121828 1142536 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.125:22: connect: no route to host
	E0603 13:44:38.121850 1142536 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-817450" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.30s)
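Note on the stop failure above: the kvm2 driver requests a guest shutdown and then polls "Waiting for machine to stop N/120" once per second; after two minutes the VM still reports "Running", so the command exits with GUEST_STOP_TIMEOUT. A minimal out-of-band check with libvirt, assuming the domain carries the profile name as the DHCP-lease lines in the log indicate (the qemu:///system URI is the one used elsewhere in this run):

	# confirm what libvirt reports for the domain
	sudo virsh -c qemu:///system list --all | grep no-preload-817450
	# force the domain off if the guest is ignoring the shutdown request
	sudo virsh -c qemu:///system destroy no-preload-817450
	# then gather logs for the issue report, as the failure box requests
	out/minikube-linux-amd64 -p no-preload-817450 logs --file=logs.txt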

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-223260 --alsologtostderr -v=3
E0603 13:43:22.013930 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:22.019253 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:22.029633 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:22.049975 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:22.090334 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:22.171473 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:22.332289 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:22.652911 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:23.293692 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:24.574424 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:27.134980 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-223260 --alsologtostderr -v=3: exit status 82 (2m0.547267496s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-223260"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 13:43:11.862539 1142168 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:43:11.862687 1142168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:43:11.862697 1142168 out.go:304] Setting ErrFile to fd 2...
	I0603 13:43:11.862701 1142168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:43:11.862898 1142168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:43:11.863131 1142168 out.go:298] Setting JSON to false
	I0603 13:43:11.863249 1142168 mustload.go:65] Loading cluster: embed-certs-223260
	I0603 13:43:11.864442 1142168 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:43:11.864665 1142168 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/config.json ...
	I0603 13:43:11.864900 1142168 mustload.go:65] Loading cluster: embed-certs-223260
	I0603 13:43:11.865085 1142168 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:43:11.865131 1142168 stop.go:39] StopHost: embed-certs-223260
	I0603 13:43:11.866191 1142168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:43:11.866263 1142168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:43:11.881831 1142168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I0603 13:43:11.882479 1142168 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:43:11.883272 1142168 main.go:141] libmachine: Using API Version  1
	I0603 13:43:11.883299 1142168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:43:11.883724 1142168 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:43:11.886254 1142168 out.go:177] * Stopping node "embed-certs-223260"  ...
	I0603 13:43:11.887578 1142168 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 13:43:11.887622 1142168 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:43:11.887907 1142168 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 13:43:11.887933 1142168 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:43:11.891529 1142168 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:43:11.892023 1142168 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:41:34 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:43:11.892071 1142168 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:43:11.892228 1142168 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:43:11.892425 1142168 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:43:11.892652 1142168 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:43:11.892820 1142168 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:43:11.993630 1142168 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 13:43:12.067892 1142168 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 13:43:12.138397 1142168 main.go:141] libmachine: Stopping "embed-certs-223260"...
	I0603 13:43:12.138431 1142168 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:43:12.140193 1142168 main.go:141] libmachine: (embed-certs-223260) Calling .Stop
	I0603 13:43:12.144325 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 0/120
	I0603 13:43:13.146455 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 1/120
	I0603 13:43:14.148087 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 2/120
	I0603 13:43:15.150185 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 3/120
	I0603 13:43:16.152058 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 4/120
	I0603 13:43:17.154721 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 5/120
	I0603 13:43:18.156504 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 6/120
	I0603 13:43:19.158315 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 7/120
	I0603 13:43:20.159812 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 8/120
	I0603 13:43:21.161703 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 9/120
	I0603 13:43:22.163164 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 10/120
	I0603 13:43:23.165029 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 11/120
	I0603 13:43:24.166667 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 12/120
	I0603 13:43:25.168230 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 13/120
	I0603 13:43:26.170066 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 14/120
	I0603 13:43:27.172418 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 15/120
	I0603 13:43:28.174056 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 16/120
	I0603 13:43:29.175794 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 17/120
	I0603 13:43:30.177228 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 18/120
	I0603 13:43:31.178678 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 19/120
	I0603 13:43:32.181879 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 20/120
	I0603 13:43:33.183878 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 21/120
	I0603 13:43:34.185294 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 22/120
	I0603 13:43:35.187211 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 23/120
	I0603 13:43:36.189054 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 24/120
	I0603 13:43:37.191315 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 25/120
	I0603 13:43:38.192964 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 26/120
	I0603 13:43:39.194551 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 27/120
	I0603 13:43:40.196263 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 28/120
	I0603 13:43:41.197693 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 29/120
	I0603 13:43:42.200028 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 30/120
	I0603 13:43:43.201580 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 31/120
	I0603 13:43:44.203299 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 32/120
	I0603 13:43:45.204770 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 33/120
	I0603 13:43:46.206324 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 34/120
	I0603 13:43:47.208571 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 35/120
	I0603 13:43:48.209996 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 36/120
	I0603 13:43:49.211952 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 37/120
	I0603 13:43:50.213527 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 38/120
	I0603 13:43:51.215045 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 39/120
	I0603 13:43:52.216720 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 40/120
	I0603 13:43:53.218287 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 41/120
	I0603 13:43:54.220259 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 42/120
	I0603 13:43:55.221898 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 43/120
	I0603 13:43:56.223473 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 44/120
	I0603 13:43:57.225896 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 45/120
	I0603 13:43:58.227884 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 46/120
	I0603 13:43:59.229465 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 47/120
	I0603 13:44:00.230747 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 48/120
	I0603 13:44:01.232033 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 49/120
	I0603 13:44:02.233464 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 50/120
	I0603 13:44:03.234914 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 51/120
	I0603 13:44:04.236293 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 52/120
	I0603 13:44:05.237765 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 53/120
	I0603 13:44:06.239124 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 54/120
	I0603 13:44:07.241339 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 55/120
	I0603 13:44:08.242659 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 56/120
	I0603 13:44:09.243931 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 57/120
	I0603 13:44:10.245318 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 58/120
	I0603 13:44:11.246608 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 59/120
	I0603 13:44:12.247878 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 60/120
	I0603 13:44:13.250148 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 61/120
	I0603 13:44:14.251752 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 62/120
	I0603 13:44:15.253438 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 63/120
	I0603 13:44:16.254777 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 64/120
	I0603 13:44:17.256978 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 65/120
	I0603 13:44:18.258529 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 66/120
	I0603 13:44:19.259916 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 67/120
	I0603 13:44:20.261288 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 68/120
	I0603 13:44:21.262752 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 69/120
	I0603 13:44:22.265003 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 70/120
	I0603 13:44:23.266364 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 71/120
	I0603 13:44:24.267834 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 72/120
	I0603 13:44:25.269523 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 73/120
	I0603 13:44:26.270749 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 74/120
	I0603 13:44:27.273040 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 75/120
	I0603 13:44:28.274517 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 76/120
	I0603 13:44:29.276025 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 77/120
	I0603 13:44:30.277555 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 78/120
	I0603 13:44:31.279233 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 79/120
	I0603 13:44:32.280542 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 80/120
	I0603 13:44:33.282231 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 81/120
	I0603 13:44:34.283925 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 82/120
	I0603 13:44:35.285616 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 83/120
	I0603 13:44:36.288203 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 84/120
	I0603 13:44:37.290509 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 85/120
	I0603 13:44:38.292274 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 86/120
	I0603 13:44:39.294159 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 87/120
	I0603 13:44:40.295970 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 88/120
	I0603 13:44:41.297491 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 89/120
	I0603 13:44:42.300119 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 90/120
	I0603 13:44:43.301605 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 91/120
	I0603 13:44:44.303041 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 92/120
	I0603 13:44:45.304732 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 93/120
	I0603 13:44:46.306253 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 94/120
	I0603 13:44:47.308249 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 95/120
	I0603 13:44:48.309899 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 96/120
	I0603 13:44:49.312196 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 97/120
	I0603 13:44:50.313904 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 98/120
	I0603 13:44:51.315760 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 99/120
	I0603 13:44:52.317730 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 100/120
	I0603 13:44:53.319504 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 101/120
	I0603 13:44:54.321539 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 102/120
	I0603 13:44:55.323201 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 103/120
	I0603 13:44:56.324703 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 104/120
	I0603 13:44:57.327016 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 105/120
	I0603 13:44:58.328506 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 106/120
	I0603 13:44:59.330424 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 107/120
	I0603 13:45:00.331852 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 108/120
	I0603 13:45:01.333341 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 109/120
	I0603 13:45:02.335312 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 110/120
	I0603 13:45:03.336914 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 111/120
	I0603 13:45:04.338441 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 112/120
	I0603 13:45:05.339844 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 113/120
	I0603 13:45:06.341485 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 114/120
	I0603 13:45:07.343755 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 115/120
	I0603 13:45:08.345291 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 116/120
	I0603 13:45:09.346795 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 117/120
	I0603 13:45:10.348143 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 118/120
	I0603 13:45:11.349929 1142168 main.go:141] libmachine: (embed-certs-223260) Waiting for machine to stop 119/120
	I0603 13:45:12.351410 1142168 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 13:45:12.351514 1142168 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 13:45:12.353675 1142168 out.go:177] 
	W0603 13:45:12.355211 1142168 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 13:45:12.355243 1142168 out.go:239] * 
	* 
	W0603 13:45:12.359853 1142168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:45:12.361333 1142168 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-223260 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223260 -n embed-certs-223260
E0603 13:45:12.805294 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:14.086439 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:15.397137 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:15.402439 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:15.412796 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:15.433188 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:15.473500 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:15.553979 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:15.714393 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:16.035277 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:16.647059 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:16.676298 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:17.956938 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:20.518104 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:21.767881 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:25.638973 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223260 -n embed-certs-223260: exit status 3 (18.494389525s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:45:30.857803 1142998 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.246:22: connect: no route to host
	E0603 13:45:30.857831 1142998 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.246:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-223260" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.04s)
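Editor's note: the ~120 "Waiting for machine to stop i/120" lines above are the visible side of a bounded stop-and-poll loop: the driver's Stop call is issued once, then the machine state is re-checked roughly once per second until either the guest reports Stopped or the retry budget is exhausted, at which point the GUEST_STOP_TIMEOUT error is raised. Below is a minimal Go sketch of that pattern; the helper names and the fake driver callbacks are hypothetical stand-ins, not minikube's actual libmachine code.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// stopWithTimeout issues one stop request, then polls the machine state
	// once per second for up to maxRetries iterations -- the pattern behind
	// the "Waiting for machine to stop i/120" lines in the log above.
	// requestStop and getState are hypothetical stand-ins for driver calls.
	func stopWithTimeout(requestStop func() error, getState func() string, maxRetries int) error {
		if err := requestStop(); err != nil {
			return err
		}
		for i := 0; i < maxRetries; i++ {
			if getState() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
			time.Sleep(1 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a guest that never powers off, as in the failing test.
		err := stopWithTimeout(
			func() error { return nil },        // stop request accepted by the driver
			func() string { return "Running" }, // state never changes
			5, // shortened from 120 for the demo
		)
		fmt.Println("stop err:", err)
	}

With 120 one-second attempts this loop accounts for the two-minute wait visible in the 139s test duration before the timeout error is surfaced.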

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-030870 --alsologtostderr -v=3
E0603 13:43:42.495678 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:43:49.861553 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:44:02.976733 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:44:08.592607 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 13:44:13.231966 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:13.237296 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:13.247582 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:13.268395 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:13.308709 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:13.389086 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:13.549545 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:13.870155 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:14.510461 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:15.791544 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:18.351944 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-030870 --alsologtostderr -v=3: exit status 82 (2m0.511341394s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-030870"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 13:43:38.606838 1142373 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:43:38.607129 1142373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:43:38.607140 1142373 out.go:304] Setting ErrFile to fd 2...
	I0603 13:43:38.607146 1142373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:43:38.607367 1142373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:43:38.607618 1142373 out.go:298] Setting JSON to false
	I0603 13:43:38.607708 1142373 mustload.go:65] Loading cluster: default-k8s-diff-port-030870
	I0603 13:43:38.608063 1142373 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:43:38.608154 1142373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/config.json ...
	I0603 13:43:38.608336 1142373 mustload.go:65] Loading cluster: default-k8s-diff-port-030870
	I0603 13:43:38.608463 1142373 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:43:38.608510 1142373 stop.go:39] StopHost: default-k8s-diff-port-030870
	I0603 13:43:38.608956 1142373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:43:38.609031 1142373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:43:38.623912 1142373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0603 13:43:38.624384 1142373 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:43:38.625019 1142373 main.go:141] libmachine: Using API Version  1
	I0603 13:43:38.625048 1142373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:43:38.625394 1142373 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:43:38.627928 1142373 out.go:177] * Stopping node "default-k8s-diff-port-030870"  ...
	I0603 13:43:38.629530 1142373 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0603 13:43:38.629572 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:43:38.629824 1142373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0603 13:43:38.629860 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:43:38.633139 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:43:38.633585 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:42:05 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:43:38.633615 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:43:38.633806 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:43:38.634017 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:43:38.634195 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:43:38.634368 1142373 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:43:38.728947 1142373 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0603 13:43:38.791044 1142373 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0603 13:43:38.851767 1142373 main.go:141] libmachine: Stopping "default-k8s-diff-port-030870"...
	I0603 13:43:38.851816 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:43:38.853596 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Stop
	I0603 13:43:38.857078 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 0/120
	I0603 13:43:39.858812 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 1/120
	I0603 13:43:40.860207 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 2/120
	I0603 13:43:41.862061 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 3/120
	I0603 13:43:42.863451 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 4/120
	I0603 13:43:43.865624 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 5/120
	I0603 13:43:44.867045 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 6/120
	I0603 13:43:45.868477 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 7/120
	I0603 13:43:46.869848 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 8/120
	I0603 13:43:47.871400 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 9/120
	I0603 13:43:48.873985 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 10/120
	I0603 13:43:49.875294 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 11/120
	I0603 13:43:50.877143 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 12/120
	I0603 13:43:51.878700 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 13/120
	I0603 13:43:52.880445 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 14/120
	I0603 13:43:53.882668 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 15/120
	I0603 13:43:54.884201 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 16/120
	I0603 13:43:55.885775 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 17/120
	I0603 13:43:56.887284 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 18/120
	I0603 13:43:57.888796 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 19/120
	I0603 13:43:58.890245 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 20/120
	I0603 13:43:59.891755 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 21/120
	I0603 13:44:00.893226 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 22/120
	I0603 13:44:01.894624 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 23/120
	I0603 13:44:02.896172 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 24/120
	I0603 13:44:03.898701 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 25/120
	I0603 13:44:04.900145 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 26/120
	I0603 13:44:05.901680 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 27/120
	I0603 13:44:06.903057 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 28/120
	I0603 13:44:07.904671 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 29/120
	I0603 13:44:08.907145 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 30/120
	I0603 13:44:09.908314 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 31/120
	I0603 13:44:10.909687 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 32/120
	I0603 13:44:11.911773 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 33/120
	I0603 13:44:12.913094 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 34/120
	I0603 13:44:13.915211 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 35/120
	I0603 13:44:14.916626 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 36/120
	I0603 13:44:15.917933 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 37/120
	I0603 13:44:16.919396 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 38/120
	I0603 13:44:17.920765 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 39/120
	I0603 13:44:18.922953 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 40/120
	I0603 13:44:19.924465 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 41/120
	I0603 13:44:20.926124 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 42/120
	I0603 13:44:21.927582 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 43/120
	I0603 13:44:22.929120 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 44/120
	I0603 13:44:23.931174 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 45/120
	I0603 13:44:24.932652 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 46/120
	I0603 13:44:25.934120 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 47/120
	I0603 13:44:26.935954 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 48/120
	I0603 13:44:27.937523 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 49/120
	I0603 13:44:28.939814 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 50/120
	I0603 13:44:29.941726 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 51/120
	I0603 13:44:30.943600 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 52/120
	I0603 13:44:31.945963 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 53/120
	I0603 13:44:32.947432 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 54/120
	I0603 13:44:33.949728 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 55/120
	I0603 13:44:34.951287 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 56/120
	I0603 13:44:35.953018 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 57/120
	I0603 13:44:36.954885 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 58/120
	I0603 13:44:37.956790 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 59/120
	I0603 13:44:38.958707 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 60/120
	I0603 13:44:39.960225 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 61/120
	I0603 13:44:40.962016 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 62/120
	I0603 13:44:41.964365 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 63/120
	I0603 13:44:42.966354 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 64/120
	I0603 13:44:43.968867 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 65/120
	I0603 13:44:44.970556 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 66/120
	I0603 13:44:45.972300 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 67/120
	I0603 13:44:46.973839 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 68/120
	I0603 13:44:47.975189 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 69/120
	I0603 13:44:48.977880 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 70/120
	I0603 13:44:49.979774 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 71/120
	I0603 13:44:50.982381 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 72/120
	I0603 13:44:51.984027 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 73/120
	I0603 13:44:52.985998 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 74/120
	I0603 13:44:53.988525 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 75/120
	I0603 13:44:54.989934 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 76/120
	I0603 13:44:55.991570 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 77/120
	I0603 13:44:56.992983 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 78/120
	I0603 13:44:57.994690 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 79/120
	I0603 13:44:58.996524 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 80/120
	I0603 13:44:59.998001 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 81/120
	I0603 13:45:00.999530 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 82/120
	I0603 13:45:02.001594 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 83/120
	I0603 13:45:03.003246 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 84/120
	I0603 13:45:04.005649 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 85/120
	I0603 13:45:05.007180 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 86/120
	I0603 13:45:06.008736 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 87/120
	I0603 13:45:07.010571 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 88/120
	I0603 13:45:08.012031 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 89/120
	I0603 13:45:09.014373 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 90/120
	I0603 13:45:10.015942 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 91/120
	I0603 13:45:11.017279 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 92/120
	I0603 13:45:12.018659 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 93/120
	I0603 13:45:13.019917 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 94/120
	I0603 13:45:14.022046 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 95/120
	I0603 13:45:15.023446 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 96/120
	I0603 13:45:16.024902 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 97/120
	I0603 13:45:17.026300 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 98/120
	I0603 13:45:18.027878 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 99/120
	I0603 13:45:19.029392 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 100/120
	I0603 13:45:20.030862 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 101/120
	I0603 13:45:21.032484 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 102/120
	I0603 13:45:22.034193 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 103/120
	I0603 13:45:23.035804 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 104/120
	I0603 13:45:24.038026 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 105/120
	I0603 13:45:25.039612 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 106/120
	I0603 13:45:26.041092 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 107/120
	I0603 13:45:27.042507 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 108/120
	I0603 13:45:28.044055 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 109/120
	I0603 13:45:29.046717 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 110/120
	I0603 13:45:30.048230 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 111/120
	I0603 13:45:31.049769 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 112/120
	I0603 13:45:32.051212 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 113/120
	I0603 13:45:33.052747 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 114/120
	I0603 13:45:34.054944 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 115/120
	I0603 13:45:35.056884 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 116/120
	I0603 13:45:36.058385 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 117/120
	I0603 13:45:37.060000 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 118/120
	I0603 13:45:38.061438 1142373 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for machine to stop 119/120
	I0603 13:45:39.062563 1142373 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0603 13:45:39.062644 1142373 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0603 13:45:39.064799 1142373 out.go:177] 
	W0603 13:45:39.066226 1142373 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0603 13:45:39.066250 1142373 out.go:239] * 
	* 
	W0603 13:45:39.071430 1142373 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:45:39.072601 1142373 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-030870 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870: exit status 3 (18.662719054s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:45:57.737830 1143177 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	E0603 13:45:57.737853 1143177 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030870" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.18s)
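Editor's note: both failing Stop runs first back up the guest's CNI and Kubernetes configuration before asking the VM to power off (the `sudo mkdir -p /var/lib/minikube/backup` and `sudo rsync --archive --relative ...` commands near the top of each stderr block). The sketch below reproduces that command sequence in Go with a dry-run executor standing in for minikube's SSH runner; it is illustrative only, not the project's implementation.

	package main

	import "fmt"

	// backupVMConfig mirrors the pre-stop backup seen in the stop logs:
	// create /var/lib/minikube/backup on the guest, then rsync /etc/cni and
	// /etc/kubernetes into it. run stands in for an SSH command runner.
	func backupVMConfig(run func(cmd string) error, dirs []string) error {
		if err := run("sudo mkdir -p /var/lib/minikube/backup"); err != nil {
			return fmt.Errorf("creating backup dir: %w", err)
		}
		for _, d := range dirs {
			cmd := fmt.Sprintf("sudo rsync --archive --relative %s /var/lib/minikube/backup", d)
			if err := run(cmd); err != nil {
				return fmt.Errorf("backing up %s: %w", d, err)
			}
		}
		return nil
	}

	func main() {
		// Dry-run executor that only prints the commands the backup would issue.
		dryRun := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
		_ = backupVMConfig(dryRun, []string{"/etc/cni", "/etc/kubernetes"})
	}

The backup succeeds in both runs; it is the subsequent power-off, not this step, that times out.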

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-817450 -n no-preload-817450
E0603 13:44:40.364417 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-817450 -n no-preload-817450: exit status 3 (3.167321083s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:44:41.289819 1142631 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.125:22: connect: no route to host
	E0603 13:44:41.289842 1142631 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.125:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-817450 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0603 13:44:43.937539 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-817450 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155565721s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.125:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-817450 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-817450 -n no-preload-817450
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-817450 -n no-preload-817450: exit status 3 (3.061211178s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:44:50.505808 1142713 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.125:22: connect: no route to host
	E0603 13:44:50.505831 1142713 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-817450" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
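Editor's note: every post-stop command in this test dies on the same symptom, `dial tcp 192.168.72.125:22: connect: no route to host` -- the guest's SSH port is unreachable, so the status check, the crictl "paused" check, and the addon enable all fail in turn. The hypothetical probe below (not part of minikube's API) shows the kind of up-front reachability check that surfaces this condition directly.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// sshReachable reports whether a host's SSH port answers within the
	// timeout. The "no route to host" errors in the log are what this probe
	// returns once the VM's network is gone.
	func sshReachable(ip string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), timeout)
		if err != nil {
			return fmt.Errorf("ssh port unreachable: %w", err)
		}
		return conn.Close()
	}

	func main() {
		if err := sshReachable("192.168.72.125", 3*time.Second); err != nil {
			fmt.Println("skip ssh-based checks:", err)
			return
		}
		fmt.Println("host reachable, safe to run status/crictl over ssh")
	}
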

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-151788 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-151788 create -f testdata/busybox.yaml: exit status 1 (47.052401ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-151788" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-151788 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 6 (223.03439ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:44:50.115700 1142784 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-151788" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 6 (222.449923ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:44:50.338661 1142814 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-151788" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)
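Editor's note: this failure is purely client-side -- the kubeconfig no longer contains the `old-k8s-version-151788` context, so `kubectl --context ... create` exits immediately with "context does not exist". As an illustration (not what the test harness does), the context can be checked up front by listing context names:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// contextExists checks whether a named context is present in the active
	// kubeconfig by listing context names via kubectl.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("old-k8s-version-151788")
		fmt.Println("context present:", ok, "err:", err)
	}

The matching status output ("does not appear in .../kubeconfig") confirms the same thing from minikube's side: the profile's context was never written back after the earlier failed start.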

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-151788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-151788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.191870494s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-151788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-151788 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-151788 describe deploy/metrics-server -n kube-system: exit status 1 (45.621516ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-151788" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-151788 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 6 (224.754468ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:46:19.799803 1143546 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-151788" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.46s)
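
Note on this failure: the addon enable dies inside the apply callback because kubectl on the node cannot reach the apiserver (localhost:8443 connection refused), and the host-side context is gone as well, so the follow-up describe also fails. A hedged triage sketch, starting with the log bundle the advisory box asks for and then checking from inside the VM whether the apiserver container is up at all (a refused localhost:8443 usually means it is not):

    # Collect the full log bundle requested by the advisory.
    out/minikube-linux-amd64 -p old-k8s-version-151788 logs --file=logs.txt
    # List all containers on the node, including exited ones.
    out/minikube-linux-amd64 -p old-k8s-version-151788 ssh -- sudo crictl ps -a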

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223260 -n embed-certs-223260
E0603 13:45:32.008187 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223260 -n embed-certs-223260: exit status 3 (3.167711453s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:45:34.025820 1143096 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.246:22: connect: no route to host
	E0603 13:45:34.025844 1143096 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.246:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-223260 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0603 13:45:35.156304 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:45:35.879668 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-223260 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155581871s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.83.246:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-223260 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223260 -n embed-certs-223260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223260 -n embed-certs-223260: exit status 3 (3.060354246s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:45:43.241846 1143206 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.246:22: connect: no route to host
	E0603 13:45:43.241869 1143206 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.246:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-223260" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
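
Note on this failure: the test expects the post-stop host status to be "Stopped", but every SSH attempt to 192.168.83.246:22 returns "no route to host", so status comes back as "Error" and the dashboard enable cannot even list containers. With the kvm2 driver, a hedged way to see what state the guest is actually in (this assumes the libvirt domain carries the profile name, as it does for old-k8s-version-151788 later in this report):

    # Ask libvirt directly what it thinks of the domain.
    virsh list --all
    virsh domstate embed-certs-223260
    # Confirm whether the guest address is reachable from the host at all.
    ping -c 3 192.168.83.246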

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
E0603 13:45:59.474455 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870: exit status 3 (3.167616098s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:46:00.905844 1143337 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	E0603 13:46:00.905866 1143337 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-030870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0603 13:46:04.595124 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:46:05.857886 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-030870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156528107s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-030870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870: exit status 3 (3.059340249s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0603 13:46:10.121819 1143418 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	E0603 13:46:10.121842 1143418 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-030870" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
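
Note on this failure: same pattern as embed-certs above, just against 192.168.39.177: the stop left the guest unreachable over SSH, so both the status probe and the crictl-based paused check fail with "no route to host". A quick reachability sketch using the harness's own SSH identity; the key path is inferred by analogy from the old-k8s-version SSH invocation later in this log, so treat it as an assumption:

    # Is the SSH port reachable at all?
    nc -vz -w 5 192.168.39.177 22
    # If it is, try the same login the test harness uses.
    ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa \
      docker@192.168.39.177 'echo ok'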

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (744.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-151788 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0603 13:46:33.450005 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:46:35.315880 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:46:37.321037 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:46:57.077597 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:47:13.967162 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:47:16.276492 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:47:27.933218 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:47:45.541563 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 13:47:55.371137 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:47:55.623114 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:47:59.241992 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:48:22.013206 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:48:38.197672 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:48:49.698339 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:49:13.231875 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:49:30.123068 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:49:40.918720 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:49:57.807594 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:49:58.228760 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 13:50:11.526204 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:50:15.397603 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:50:39.212283 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:50:43.082424 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:50:54.355141 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:51:21.279067 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 13:51:22.038789 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:52:27.933482 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:52:45.541666 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 13:53:22.013696 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
E0603 13:54:13.231196 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-151788 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m20.218651339s)

                                                
                                                
-- stdout --
	* [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-151788" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 13:46:22.347386 1143678 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:46:22.347655 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347666 1143678 out.go:304] Setting ErrFile to fd 2...
	I0603 13:46:22.347672 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347855 1143678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:46:22.348458 1143678 out.go:298] Setting JSON to false
	I0603 13:46:22.349502 1143678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16129,"bootTime":1717406253,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:46:22.349571 1143678 start.go:139] virtualization: kvm guest
	I0603 13:46:22.351720 1143678 out.go:177] * [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:46:22.353180 1143678 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:46:22.353235 1143678 notify.go:220] Checking for updates...
	I0603 13:46:22.354400 1143678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:46:22.355680 1143678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:46:22.356796 1143678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:46:22.357952 1143678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:46:22.359052 1143678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:46:22.360807 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:46:22.361230 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.361306 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.376241 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0603 13:46:22.376679 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.377267 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.377292 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.377663 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.377897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.379705 1143678 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 13:46:22.380895 1143678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:46:22.381188 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.381222 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.396163 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0603 13:46:22.396669 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.397158 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.397180 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.397509 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.397693 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.433731 1143678 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:46:22.434876 1143678 start.go:297] selected driver: kvm2
	I0603 13:46:22.434897 1143678 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.435028 1143678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:46:22.435716 1143678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.435807 1143678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:46:22.451200 1143678 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:46:22.451663 1143678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:46:22.451755 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:46:22.451773 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:46:22.451832 1143678 start.go:340] cluster config:
	{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.451961 1143678 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.454327 1143678 out.go:177] * Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	I0603 13:46:22.455453 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:46:22.455492 1143678 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 13:46:22.455501 1143678 cache.go:56] Caching tarball of preloaded images
	I0603 13:46:22.455591 1143678 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:46:22.455604 1143678 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 13:46:22.455685 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:46:22.455860 1143678 start.go:360] acquireMachinesLock for old-k8s-version-151788: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:50:08.083130 1143678 start.go:364] duration metric: took 3m45.627229097s to acquireMachinesLock for "old-k8s-version-151788"
	I0603 13:50:08.083256 1143678 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:08.083266 1143678 fix.go:54] fixHost starting: 
	I0603 13:50:08.083762 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:08.083812 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:08.103187 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0603 13:50:08.103693 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:08.104269 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:50:08.104299 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:08.104746 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:08.105115 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:08.105347 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetState
	I0603 13:50:08.107125 1143678 fix.go:112] recreateIfNeeded on old-k8s-version-151788: state=Stopped err=<nil>
	I0603 13:50:08.107173 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	W0603 13:50:08.107340 1143678 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:08.109207 1143678 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-151788" ...
	I0603 13:50:08.110706 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .Start
	I0603 13:50:08.110954 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring networks are active...
	I0603 13:50:08.111890 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network default is active
	I0603 13:50:08.112291 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network mk-old-k8s-version-151788 is active
	I0603 13:50:08.112708 1143678 main.go:141] libmachine: (old-k8s-version-151788) Getting domain xml...
	I0603 13:50:08.113547 1143678 main.go:141] libmachine: (old-k8s-version-151788) Creating domain...
	I0603 13:50:09.528855 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting to get IP...
	I0603 13:50:09.529978 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.530410 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.530453 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.530382 1144654 retry.go:31] will retry after 208.935457ms: waiting for machine to come up
	I0603 13:50:09.741245 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.741816 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.741864 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.741769 1144654 retry.go:31] will retry after 376.532154ms: waiting for machine to come up
	I0603 13:50:10.120533 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.121261 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.121337 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.121239 1144654 retry.go:31] will retry after 339.126643ms: waiting for machine to come up
	I0603 13:50:10.461708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.462488 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.462514 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.462425 1144654 retry.go:31] will retry after 490.057426ms: waiting for machine to come up
	I0603 13:50:10.954107 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.954887 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.954921 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.954840 1144654 retry.go:31] will retry after 711.209001ms: waiting for machine to come up
	I0603 13:50:11.667459 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:11.668198 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:11.668231 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:11.668135 1144654 retry.go:31] will retry after 928.879285ms: waiting for machine to come up
	I0603 13:50:12.598536 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:12.598972 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:12.599008 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:12.598948 1144654 retry.go:31] will retry after 882.970422ms: waiting for machine to come up
	I0603 13:50:13.483171 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:13.483723 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:13.483758 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:13.483640 1144654 retry.go:31] will retry after 1.215665556s: waiting for machine to come up
	I0603 13:50:14.701392 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:14.701960 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:14.701991 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:14.701899 1144654 retry.go:31] will retry after 1.614371992s: waiting for machine to come up
	I0603 13:50:16.318708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:16.319127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:16.319148 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:16.319103 1144654 retry.go:31] will retry after 2.146267337s: waiting for machine to come up
	I0603 13:50:18.466825 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:18.467260 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:18.467292 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:18.467187 1144654 retry.go:31] will retry after 2.752334209s: waiting for machine to come up
	I0603 13:50:21.220813 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:21.221235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:21.221267 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:21.221182 1144654 retry.go:31] will retry after 3.082080728s: waiting for machine to come up
	I0603 13:50:24.304462 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:24.305104 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:24.305175 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:24.305099 1144654 retry.go:31] will retry after 4.178596743s: waiting for machine to come up
	I0603 13:50:28.485041 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.485598 1143678 main.go:141] libmachine: (old-k8s-version-151788) Found IP for machine: 192.168.50.65
	I0603 13:50:28.485624 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserving static IP address...
	I0603 13:50:28.485639 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has current primary IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.486053 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserved static IP address: 192.168.50.65
	I0603 13:50:28.486109 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.486123 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting for SSH to be available...
	I0603 13:50:28.486144 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | skip adding static IP to network mk-old-k8s-version-151788 - found existing host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"}
	I0603 13:50:28.486156 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Getting to WaitForSSH function...
	I0603 13:50:28.488305 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.488754 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.488788 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.489025 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH client type: external
	I0603 13:50:28.489048 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa (-rw-------)
	I0603 13:50:28.489114 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:28.489147 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | About to run SSH command:
	I0603 13:50:28.489167 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | exit 0
	I0603 13:50:28.613732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:28.614183 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:50:28.614879 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.617742 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.618270 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618481 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:50:28.618699 1143678 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:28.618719 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:28.618967 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.621356 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621655 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.621685 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.622117 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622321 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622511 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.622750 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.622946 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.622958 1143678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:28.726383 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:28.726419 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.726740 1143678 buildroot.go:166] provisioning hostname "old-k8s-version-151788"
	I0603 13:50:28.726777 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.727042 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.729901 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730372 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.730402 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730599 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.730824 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731031 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731205 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.731403 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.731585 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.731599 1143678 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-151788 && echo "old-k8s-version-151788" | sudo tee /etc/hostname
	I0603 13:50:28.848834 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-151788
	
	I0603 13:50:28.848867 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.852250 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852698 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.852721 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852980 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.853239 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853536 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853819 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.854093 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.854338 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.854367 1143678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-151788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-151788/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-151788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:28.967427 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:28.967461 1143678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:28.967520 1143678 buildroot.go:174] setting up certificates
	I0603 13:50:28.967538 1143678 provision.go:84] configureAuth start
	I0603 13:50:28.967550 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.967946 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.970841 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971226 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.971256 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971449 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.974316 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974702 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.974732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974911 1143678 provision.go:143] copyHostCerts
	I0603 13:50:28.974994 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:28.975010 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:28.975068 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:28.975247 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:28.975260 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:28.975283 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:28.975354 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:28.975362 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:28.975385 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:28.975463 1143678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-151788 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-151788]
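
The provision.go:117 line above records the issuance of the machine's TLS server certificate, signed by the minikube CA and carrying the listed SANs. Below is a rough crypto/x509 sketch of that kind of issuance: the organization name and SAN list are copied from the log line, while the key size, validity periods, and the freshly generated CA are placeholder assumptions (the real CA is read from ca.pem / ca-key.pem), and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real one is loaded from ca.pem / ca-key.pem (errors elided).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the org and SANs from the provision.go log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-151788"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-151788"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.65")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
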
	I0603 13:50:29.096777 1143678 provision.go:177] copyRemoteCerts
	I0603 13:50:29.096835 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:29.096865 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.099989 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100408 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.100434 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100644 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.100831 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.100975 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.101144 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.184886 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:29.211432 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 13:50:29.238552 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:50:29.266803 1143678 provision.go:87] duration metric: took 299.247567ms to configureAuth
	I0603 13:50:29.266844 1143678 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:29.267107 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:50:29.267220 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.270966 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271417 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.271472 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271688 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.271893 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272121 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272327 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.272544 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.272787 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.272811 1143678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:29.548407 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:29.548437 1143678 machine.go:97] duration metric: took 929.724002ms to provisionDockerMachine
	I0603 13:50:29.548449 1143678 start.go:293] postStartSetup for "old-k8s-version-151788" (driver="kvm2")
	I0603 13:50:29.548461 1143678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:29.548486 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.548924 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:29.548992 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.552127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552531 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.552571 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552756 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.552974 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.553166 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.553364 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.637026 1143678 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:29.641264 1143678 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:29.641293 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:29.641376 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:29.641509 1143678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:29.641600 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:29.657273 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:29.688757 1143678 start.go:296] duration metric: took 140.291954ms for postStartSetup
	I0603 13:50:29.688806 1143678 fix.go:56] duration metric: took 21.605539652s for fixHost
	I0603 13:50:29.688843 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.691764 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692170 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.692216 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692356 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.692623 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692814 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692996 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.693180 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.693372 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.693384 1143678 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:50:29.798629 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422629.770375968
	
	I0603 13:50:29.798655 1143678 fix.go:216] guest clock: 1717422629.770375968
	I0603 13:50:29.798662 1143678 fix.go:229] Guest: 2024-06-03 13:50:29.770375968 +0000 UTC Remote: 2024-06-03 13:50:29.688811675 +0000 UTC m=+247.377673500 (delta=81.564293ms)
	I0603 13:50:29.798683 1143678 fix.go:200] guest clock delta is within tolerance: 81.564293ms
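
The `date +%s.%N` exchange above is the guest-clock check: the guest's wall-clock reading is parsed and compared with the timestamp the host recorded when it issued the command. The small Go program below reproduces the 81.564293ms delta from the two values in the log; the 1-second tolerance used in the comparison is only an assumption for illustration, since the log states the delta is within tolerance without printing the limit.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest output of `date +%s.%N`, as captured in the log.
	const guestRaw = "1717422629.770375968"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Timestamp the host recorded just before issuing the command ("Remote:" above).
	host := time.Date(2024, 6, 3, 13, 50, 29, 688811675, time.UTC)

	delta := guest.Sub(host)
	tolerance := 1 * time.Second // assumed; the log only says "within tolerance"
	fmt.Printf("guest clock delta: %v (within %v: %t)\n", delta, tolerance, delta > -tolerance && delta < tolerance)
}
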
	I0603 13:50:29.798688 1143678 start.go:83] releasing machines lock for "old-k8s-version-151788", held for 21.715483341s
	I0603 13:50:29.798712 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.799019 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:29.802078 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802479 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.802522 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802674 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803271 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803496 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803584 1143678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:29.803646 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.803961 1143678 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:29.803988 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.806505 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806863 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806926 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.806961 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807093 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807299 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807345 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.807386 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807476 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.807670 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807669 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.807841 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807947 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.808183 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.890622 1143678 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:29.918437 1143678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:30.064471 1143678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:30.073881 1143678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:30.073969 1143678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:30.097037 1143678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:30.097070 1143678 start.go:494] detecting cgroup driver to use...
	I0603 13:50:30.097147 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:30.114374 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:30.132000 1143678 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:30.132075 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:30.148156 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:30.164601 1143678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:30.303125 1143678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:30.475478 1143678 docker.go:233] disabling docker service ...
	I0603 13:50:30.475578 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:30.494632 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:30.513383 1143678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:30.691539 1143678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:30.849280 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:30.869107 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:30.893451 1143678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 13:50:30.893528 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.909358 1143678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:30.909465 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.926891 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.941879 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
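
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.2, switch cgroup_manager to cgroupfs, and replace any conmon_cgroup setting with "pod". The Go sketch below applies the same three edits to a toy stand-in for that file; the starting values in the constant are assumptions, not the actual contents shipped on the ISO.

package main

import (
	"fmt"
	"regexp"
)

// Toy stand-in for /etc/crio/crio.conf.d/02-crio.conf; the values here are
// assumptions, not the file actually shipped on the minikube ISO.
const conf = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
	// Same three edits as the sed commands in the log, in the same order.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
	out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(out, "")
	out = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(out, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(out)
}

Restarting cri-o afterwards, as the log does a few lines further down with `sudo systemctl restart crio`, is what makes the new pause image and cgroup driver take effect.
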
	I0603 13:50:30.957985 1143678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:30.971349 1143678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:30.984948 1143678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:30.985023 1143678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:30.999255 1143678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:31.011615 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:31.162848 1143678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:31.352121 1143678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:31.352190 1143678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:31.357946 1143678 start.go:562] Will wait 60s for crictl version
	I0603 13:50:31.358032 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:31.362540 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:31.410642 1143678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:31.410757 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.444750 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.482404 1143678 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 13:50:31.484218 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:31.488049 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488663 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:31.488695 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488985 1143678 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:31.494813 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:31.511436 1143678 kubeadm.go:877] updating cluster {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:31.511597 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:50:31.511659 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:31.571733 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:31.571819 1143678 ssh_runner.go:195] Run: which lz4
	I0603 13:50:31.577765 1143678 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:50:31.583983 1143678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:31.584025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 13:50:33.399678 1143678 crio.go:462] duration metric: took 1.821959808s to copy over tarball
	I0603 13:50:33.399768 1143678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:36.631033 1143678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.231219364s)
	I0603 13:50:36.631081 1143678 crio.go:469] duration metric: took 3.231364789s to extract the tarball
	I0603 13:50:36.631092 1143678 ssh_runner.go:146] rm: /preloaded.tar.lz4
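
The sequence above is the image-preload path: check whether /preloaded.tar.lz4 already exists on the VM, copy the cached tarball over if not, unpack it into /var with lz4-compressed tar, and remove it afterwards. A simplified local Go sketch of the same flow is below; paths are taken from the log, cp stands in for the scp transfer, and the final removal is left out.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const target = "/preloaded.tar.lz4"
	const cached = "/home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"

	// The log's "existence check" step: only transfer the tarball if it is missing.
	if _, err := os.Stat(target); os.IsNotExist(err) {
		// minikube copies this over scp; a local cp stands in for that here.
		if out, err := exec.Command("cp", cached, target).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "copy failed: %v\n%s", err, out)
			return
		}
	}
	// Same tar invocation as the log, minus sudo.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", target)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "extract failed: %v\n%s", err, out)
	}
}
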
	I0603 13:50:36.677954 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:36.718160 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:36.718197 1143678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.718456 1143678 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.718302 1143678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.718343 1143678 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.718858 1143678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.720644 1143678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.720573 1143678 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720576 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.720603 1143678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.720608 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.721118 1143678 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.907182 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.907179 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.910017 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.920969 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.925739 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.935710 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.946767 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 13:50:36.973425 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.050763 1143678 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 13:50:37.050817 1143678 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.050846 1143678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 13:50:37.050876 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.050880 1143678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.050906 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162505 1143678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 13:50:37.162561 1143678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.162608 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162706 1143678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 13:50:37.162727 1143678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.162754 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162858 1143678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 13:50:37.162898 1143678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.162922 1143678 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 13:50:37.162965 1143678 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 13:50:37.163001 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162943 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.164963 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.165019 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.165136 1143678 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 13:50:37.165260 1143678 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.165295 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.188179 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.188292 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 13:50:37.188315 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.188371 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.188561 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.300592 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 13:50:37.300642 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 13:50:37.360149 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 13:50:37.360196 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 13:50:37.360346 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 13:50:37.360371 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 13:50:37.360436 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 13:50:37.543409 1143678 cache_images.go:92] duration metric: took 825.189409ms to LoadCachedImages
	W0603 13:50:37.543559 1143678 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 13:50:37.543581 1143678 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0603 13:50:37.543723 1143678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-151788 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
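
The kubelet [Unit]/[Service] fragment above is written out as a systemd drop-in (the 429-byte 10-kubeadm.conf scp'd a few lines below). The text/template sketch that follows renders a drop-in of that shape; the template structure is a guess for illustration rather than minikube's actual template, while the flag values are exactly the ones shown above.

package main

import (
	"os"
	"text/template"
)

// Values pulled from the kubelet unit shown above; the template shape is an
// illustrative guess, not minikube's real template.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.20.0/kubelet",
		"NodeName":    "old-k8s-version-151788",
		"NodeIP":      "192.168.50.65",
	})
}
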
	I0603 13:50:37.543804 1143678 ssh_runner.go:195] Run: crio config
	I0603 13:50:37.601388 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:50:37.601428 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:37.601445 1143678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:37.601471 1143678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-151788 NodeName:old-k8s-version-151788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 13:50:37.601664 1143678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-151788"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:37.601746 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 13:50:37.613507 1143678 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:37.613588 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:37.623853 1143678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0603 13:50:37.642298 1143678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:37.660863 1143678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0603 13:50:37.679974 1143678 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:37.685376 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:37.702732 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:37.859343 1143678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:37.880684 1143678 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788 for IP: 192.168.50.65
	I0603 13:50:37.880714 1143678 certs.go:194] generating shared ca certs ...
	I0603 13:50:37.880737 1143678 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:37.880952 1143678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:37.881012 1143678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:37.881024 1143678 certs.go:256] generating profile certs ...
	I0603 13:50:37.881179 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key
	I0603 13:50:37.881279 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3
	I0603 13:50:37.881334 1143678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key
	I0603 13:50:37.881554 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:37.881602 1143678 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:37.881629 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:37.881667 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:37.881698 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:37.881730 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:37.881805 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:37.882741 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:37.919377 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:37.957218 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:37.987016 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:38.024442 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 13:50:38.051406 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:38.094816 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:38.143689 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:50:38.171488 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:38.197296 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:38.224025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:38.250728 1143678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:38.270485 1143678 ssh_runner.go:195] Run: openssl version
	I0603 13:50:38.276995 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:38.288742 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293880 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293955 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.300456 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:38.312180 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:38.324349 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329812 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329881 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.337560 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:38.350229 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:38.362635 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368842 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368920 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.376029 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
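
The openssl/ln pattern above installs each CA certificate into the system trust store under its subject-hash name; the 3ec20f2e.0, b5213941.0 and 51391683.0 links in the log are exactly such hashes. A small Go sketch of that hash-and-symlink step, shelling out to openssl just as the log does (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the subject hash of a certificate and symlinks it
// into the trust directory as <hash>.0, so TLS clients using the system store
// can find it. Illustrative only.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
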
	I0603 13:50:38.387703 1143678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:38.393071 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:38.399760 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:38.406332 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:38.413154 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:38.419162 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:38.425818 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
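
The final batch of openssl runs above checks that none of the control-plane certificates expire within the next 24 hours (`-checkend 86400`). A rough Go equivalent of one such check using crypto/x509; the certificate path is one of those probed in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon approximates `openssl x509 -noout -in <crt> -checkend <seconds>`:
// it reports whether the certificate's NotAfter falls within the given window.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
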
	I0603 13:50:38.432495 1143678 kubeadm.go:391] StartCluster: {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:38.432659 1143678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:38.432718 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.479889 1143678 cri.go:89] found id: ""
	I0603 13:50:38.479975 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:38.490549 1143678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:38.490574 1143678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:38.490580 1143678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:38.490637 1143678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:38.501024 1143678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:38.503665 1143678 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:38.504563 1143678 kubeconfig.go:62] /home/jenkins/minikube-integration/19011-1078924/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-151788" cluster setting kubeconfig missing "old-k8s-version-151788" context setting]
	I0603 13:50:38.505614 1143678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:38.562691 1143678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:38.573839 1143678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.65
	I0603 13:50:38.573889 1143678 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:38.573905 1143678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:38.573987 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.615876 1143678 cri.go:89] found id: ""
	I0603 13:50:38.615972 1143678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:38.633568 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:38.645197 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:38.645229 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:38.645291 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:50:38.655344 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:38.655423 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:38.665789 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:50:38.674765 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:38.674842 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:38.684268 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.693586 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:38.693650 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.703313 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:50:38.712523 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:38.712597 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
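The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; here every grep fails because the files do not exist, so the rm -f calls are effectively no-ops. The same pattern condensed into a loop, with the endpoint and file names taken from the log:

    ENDPOINT="https://control-plane.minikube.internal:8443"
    # Drop any kubeconfig that does not already point at the expected endpoint.
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$conf" 2>/dev/null; then
            sudo rm -f "/etc/kubernetes/$conf"
        fi
    done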
	I0603 13:50:38.722362 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:38.732190 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:38.875545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.722534 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.970226 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.090817 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
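The five commands above rebuild the control plane piecewise: instead of a full kubeadm init, only the phases needed for a restart are re-run against the rendered config. Sketched as a loop (binary path, phase list, and config path exactly as they appear in the log; the loop form itself is just a compact illustration):

    KUBEADM_PATH="/var/lib/minikube/binaries/v1.20.0"
    CONFIG="/var/tmp/minikube/kubeadm.yaml"
    # Order matters: certs, kubeconfigs, kubelet bootstrap, static pods, local etcd.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase is intentionally unquoted so "certs all" expands to two arguments.
        sudo env PATH="$KUBEADM_PATH:$PATH" kubeadm init phase $phase --config "$CONFIG"
    done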
	I0603 13:50:40.193178 1143678 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:40.193485 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:40.693580 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.193579 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.693608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:42.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:42.693593 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.194448 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.693645 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.694583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.194065 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.694138 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.194173 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.694344 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:47.194063 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:47.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.193894 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.694053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.694081 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.194053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.694265 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.694283 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:52.194444 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:52.694071 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.193597 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.694503 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.193609 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.694446 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.193856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.693583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.194271 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.693558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:57.194427 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:57.694027 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.193718 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.693488 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.193725 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.694310 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.194455 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.694182 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.193916 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.693504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:02.194236 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:02.694248 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.194094 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.694072 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.194494 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.693899 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.193578 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.193934 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.693586 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:07.193993 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:07.693540 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.194490 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.694498 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.194496 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.694286 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.193605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.694326 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.193904 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.694504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:12.194093 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:12.694356 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.194219 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.693546 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.694003 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.694012 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.193567 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.694014 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:17.193554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:17.693856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.193853 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.693858 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.193568 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.693680 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.193556 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.694129 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.193662 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.694445 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:22.193668 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:22.694004 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.193793 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.694340 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.194411 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.694314 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.194501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.693545 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.194255 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.694312 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:27.194453 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:27.694334 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.193809 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.693744 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.193608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.194111 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.694213 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.694336 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:32.193716 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:32.693501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.194174 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.693995 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.194242 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.693961 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.194052 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.693730 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.193559 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.693763 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:37.194274 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:37.693590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.194328 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.694296 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.194272 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.693607 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
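The long run of pgrep calls above is the wait-for-apiserver loop: every ~500ms minikube checks for a kube-apiserver process, and after roughly a minute without a match it pauses the poll, collects diagnostics (below), and then tries again. An equivalent standalone poll with an explicit deadline (the 60-second budget is an assumption for illustration; the pgrep expression is copied from the log):

    deadline=$((SECONDS + 60))   # assumed 60s budget, for illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "kube-apiserver process never appeared" >&2
            break
        fi
        sleep 0.5
    done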
	I0603 13:51:40.193595 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:40.193691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:40.237747 1143678 cri.go:89] found id: ""
	I0603 13:51:40.237776 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.237785 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:40.237792 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:40.237854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:40.275924 1143678 cri.go:89] found id: ""
	I0603 13:51:40.275964 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.275975 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:40.275983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:40.276049 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:40.314827 1143678 cri.go:89] found id: ""
	I0603 13:51:40.314857 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.314870 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:40.314877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:40.314939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:40.359040 1143678 cri.go:89] found id: ""
	I0603 13:51:40.359072 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.359084 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:40.359092 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:40.359154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:40.396136 1143678 cri.go:89] found id: ""
	I0603 13:51:40.396170 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.396185 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:40.396194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:40.396261 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:40.436766 1143678 cri.go:89] found id: ""
	I0603 13:51:40.436803 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.436814 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:40.436828 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:40.436902 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:40.477580 1143678 cri.go:89] found id: ""
	I0603 13:51:40.477606 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.477615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:40.477621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:40.477713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:40.518920 1143678 cri.go:89] found id: ""
	I0603 13:51:40.518960 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.518972 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
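Each "listing CRI containers" / "found id: ''" pair above is one crictl query, repeated for every expected component; all of them return nothing because no control-plane containers ever started. The same sweep written as a loop (component names copied from the log):

    # Check whether any expected component has a container (running or exited) in CRI-O.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        if [ -z "$ids" ]; then
            echo "no container found matching $name"
        fi
    done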
	I0603 13:51:40.518984 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:40.519001 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:40.659881 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:40.659913 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:40.659932 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:40.727850 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:40.727894 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:40.774153 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:40.774189 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:40.828054 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:40.828094 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
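With no containers to inspect, the remaining diagnostics are host-level: kubectl describe nodes (which fails because nothing is listening on localhost:8443), the CRI-O and kubelet journals, a container-status listing, and dmesg. Bundled into one script for manual reproduction on the node (commands copied from the log; the output file is a hypothetical convenience, not something minikube writes):

    OUT=/tmp/minikube-diagnostics.txt   # hypothetical path, not part of the log
    {
        sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
            --kubeconfig=/var/lib/minikube/kubeconfig || true   # refused while the apiserver is down
        sudo journalctl -u crio -n 400
        sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
        sudo journalctl -u kubelet -n 400
        sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    } > "$OUT" 2>&1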
	I0603 13:51:43.342659 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:43.357063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:43.357131 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:43.398000 1143678 cri.go:89] found id: ""
	I0603 13:51:43.398036 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.398045 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:43.398051 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:43.398106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:43.436761 1143678 cri.go:89] found id: ""
	I0603 13:51:43.436805 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.436814 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:43.436820 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:43.436872 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:43.478122 1143678 cri.go:89] found id: ""
	I0603 13:51:43.478154 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.478164 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:43.478172 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:43.478243 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:43.514473 1143678 cri.go:89] found id: ""
	I0603 13:51:43.514511 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.514523 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:43.514532 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:43.514600 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:43.552354 1143678 cri.go:89] found id: ""
	I0603 13:51:43.552390 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.552399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:43.552405 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:43.552489 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:43.590637 1143678 cri.go:89] found id: ""
	I0603 13:51:43.590665 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.590677 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:43.590685 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:43.590745 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:43.633958 1143678 cri.go:89] found id: ""
	I0603 13:51:43.634001 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.634013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:43.634021 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:43.634088 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:43.672640 1143678 cri.go:89] found id: ""
	I0603 13:51:43.672683 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.672695 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:43.672716 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:43.672733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:43.725880 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:43.725937 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:43.743736 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:43.743771 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:43.831757 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:43.831785 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:43.831801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:43.905062 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:43.905114 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:46.459588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:46.472911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:46.472983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:46.513723 1143678 cri.go:89] found id: ""
	I0603 13:51:46.513757 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.513768 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:46.513776 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:46.513845 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:46.549205 1143678 cri.go:89] found id: ""
	I0603 13:51:46.549234 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.549242 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:46.549251 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:46.549311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:46.585004 1143678 cri.go:89] found id: ""
	I0603 13:51:46.585042 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.585053 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:46.585063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:46.585120 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:46.620534 1143678 cri.go:89] found id: ""
	I0603 13:51:46.620571 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.620582 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:46.620590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:46.620661 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:46.655974 1143678 cri.go:89] found id: ""
	I0603 13:51:46.656005 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.656014 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:46.656020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:46.656091 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:46.693078 1143678 cri.go:89] found id: ""
	I0603 13:51:46.693141 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.693158 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:46.693168 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:46.693244 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:46.729177 1143678 cri.go:89] found id: ""
	I0603 13:51:46.729213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.729223 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:46.729232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:46.729300 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:46.766899 1143678 cri.go:89] found id: ""
	I0603 13:51:46.766929 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.766937 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:46.766946 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:46.766959 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:46.826715 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:46.826757 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:46.841461 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:46.841504 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:46.914505 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:46.914533 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:46.914551 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:46.989886 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:46.989928 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:49.532804 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:49.547359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:49.547438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:49.584262 1143678 cri.go:89] found id: ""
	I0603 13:51:49.584299 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.584311 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:49.584319 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:49.584389 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:49.622332 1143678 cri.go:89] found id: ""
	I0603 13:51:49.622372 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.622384 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:49.622393 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:49.622488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:49.664339 1143678 cri.go:89] found id: ""
	I0603 13:51:49.664378 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.664390 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:49.664399 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:49.664468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:49.712528 1143678 cri.go:89] found id: ""
	I0603 13:51:49.712558 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.712565 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:49.712574 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:49.712640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:49.767343 1143678 cri.go:89] found id: ""
	I0603 13:51:49.767374 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.767382 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:49.767388 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:49.767450 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:49.822457 1143678 cri.go:89] found id: ""
	I0603 13:51:49.822491 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.822499 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:49.822505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:49.822561 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:49.867823 1143678 cri.go:89] found id: ""
	I0603 13:51:49.867855 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.867867 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:49.867875 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:49.867936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:49.906765 1143678 cri.go:89] found id: ""
	I0603 13:51:49.906797 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.906805 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:49.906816 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:49.906829 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:49.921731 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:49.921764 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:49.993832 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:49.993860 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:49.993878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:50.070080 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:50.070125 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:50.112323 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:50.112357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:52.666289 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:52.680475 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:52.680550 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:52.722025 1143678 cri.go:89] found id: ""
	I0603 13:51:52.722063 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.722075 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:52.722083 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:52.722145 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:52.759709 1143678 cri.go:89] found id: ""
	I0603 13:51:52.759742 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.759754 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:52.759762 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:52.759838 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:52.797131 1143678 cri.go:89] found id: ""
	I0603 13:51:52.797162 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.797171 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:52.797176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:52.797231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:52.832921 1143678 cri.go:89] found id: ""
	I0603 13:51:52.832951 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.832959 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:52.832965 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:52.833024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:52.869361 1143678 cri.go:89] found id: ""
	I0603 13:51:52.869389 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.869399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:52.869422 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:52.869495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:52.905863 1143678 cri.go:89] found id: ""
	I0603 13:51:52.905897 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.905909 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:52.905917 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:52.905985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:52.940407 1143678 cri.go:89] found id: ""
	I0603 13:51:52.940438 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.940446 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:52.940452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:52.940517 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:52.982079 1143678 cri.go:89] found id: ""
	I0603 13:51:52.982115 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.982126 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:52.982138 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:52.982155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:53.066897 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:53.066942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:53.108016 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:53.108056 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:53.164105 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:53.164151 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:53.178708 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:53.178743 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:53.257441 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.758633 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:55.774241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:55.774329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:55.809373 1143678 cri.go:89] found id: ""
	I0603 13:51:55.809436 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.809450 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:55.809467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:55.809539 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:55.849741 1143678 cri.go:89] found id: ""
	I0603 13:51:55.849768 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.849776 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:55.849783 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:55.849834 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:55.893184 1143678 cri.go:89] found id: ""
	I0603 13:51:55.893216 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.893228 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:55.893238 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:55.893307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:55.931572 1143678 cri.go:89] found id: ""
	I0603 13:51:55.931618 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.931632 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:55.931642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:55.931713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:55.969490 1143678 cri.go:89] found id: ""
	I0603 13:51:55.969527 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.969538 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:55.969546 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:55.969614 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:56.009266 1143678 cri.go:89] found id: ""
	I0603 13:51:56.009301 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.009313 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:56.009321 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:56.009394 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:56.049471 1143678 cri.go:89] found id: ""
	I0603 13:51:56.049520 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.049540 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:56.049547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:56.049616 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:56.090176 1143678 cri.go:89] found id: ""
	I0603 13:51:56.090213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.090228 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:56.090241 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:56.090266 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:56.175692 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:56.175737 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:56.222642 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:56.222683 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:56.276258 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:56.276301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:56.291703 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:56.291739 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:56.364788 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:58.865558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:58.879983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:58.880074 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:58.917422 1143678 cri.go:89] found id: ""
	I0603 13:51:58.917461 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.917473 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:58.917480 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:58.917535 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:58.953900 1143678 cri.go:89] found id: ""
	I0603 13:51:58.953933 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.953943 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:58.953959 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:58.954030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:58.988677 1143678 cri.go:89] found id: ""
	I0603 13:51:58.988704 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.988713 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:58.988721 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:58.988783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:59.023436 1143678 cri.go:89] found id: ""
	I0603 13:51:59.023474 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.023486 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:59.023494 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:59.023570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:59.061357 1143678 cri.go:89] found id: ""
	I0603 13:51:59.061386 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.061394 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:59.061400 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:59.061487 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:59.102995 1143678 cri.go:89] found id: ""
	I0603 13:51:59.103025 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.103038 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:59.103047 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:59.103124 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:59.141443 1143678 cri.go:89] found id: ""
	I0603 13:51:59.141480 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.141492 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:59.141499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:59.141586 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:59.182909 1143678 cri.go:89] found id: ""
	I0603 13:51:59.182943 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.182953 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:59.182967 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:59.182984 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:59.259533 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:59.259580 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:59.308976 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:59.309016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.362092 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:59.362142 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:59.378836 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:59.378887 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:59.454524 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:01.954939 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:01.969968 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:01.970039 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:02.014226 1143678 cri.go:89] found id: ""
	I0603 13:52:02.014267 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.014280 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:02.014289 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:02.014361 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:02.051189 1143678 cri.go:89] found id: ""
	I0603 13:52:02.051244 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.051259 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:02.051268 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:02.051349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:02.093509 1143678 cri.go:89] found id: ""
	I0603 13:52:02.093548 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.093575 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:02.093586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:02.093718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:02.132069 1143678 cri.go:89] found id: ""
	I0603 13:52:02.132113 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.132129 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:02.132138 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:02.132299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:02.168043 1143678 cri.go:89] found id: ""
	I0603 13:52:02.168071 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.168079 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:02.168085 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:02.168138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:02.207029 1143678 cri.go:89] found id: ""
	I0603 13:52:02.207064 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.207074 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:02.207081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:02.207134 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:02.247669 1143678 cri.go:89] found id: ""
	I0603 13:52:02.247719 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.247728 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:02.247734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:02.247848 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:02.285780 1143678 cri.go:89] found id: ""
	I0603 13:52:02.285817 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.285829 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:02.285841 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:02.285863 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:02.348775 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:02.349776 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:02.364654 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:02.364691 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:02.447948 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:02.447978 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:02.447992 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:02.534039 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:02.534100 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:05.080437 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:05.094169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:05.094245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:05.132312 1143678 cri.go:89] found id: ""
	I0603 13:52:05.132339 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.132346 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:05.132352 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:05.132423 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:05.168941 1143678 cri.go:89] found id: ""
	I0603 13:52:05.168979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.168990 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:05.168999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:05.169068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:05.207151 1143678 cri.go:89] found id: ""
	I0603 13:52:05.207188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.207196 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:05.207202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:05.207272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:05.258807 1143678 cri.go:89] found id: ""
	I0603 13:52:05.258839 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.258850 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:05.258859 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:05.259004 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:05.298250 1143678 cri.go:89] found id: ""
	I0603 13:52:05.298285 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.298297 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:05.298306 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:05.298381 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:05.340922 1143678 cri.go:89] found id: ""
	I0603 13:52:05.340951 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.340959 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:05.340966 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:05.341027 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:05.382680 1143678 cri.go:89] found id: ""
	I0603 13:52:05.382707 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.382715 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:05.382722 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:05.382777 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:05.426774 1143678 cri.go:89] found id: ""
	I0603 13:52:05.426801 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.426811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:05.426822 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:05.426836 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:05.483042 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:05.483091 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:05.499119 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:05.499159 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:05.580933 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:05.580962 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:05.580983 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:05.660395 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:05.660437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.200887 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:08.215113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:08.215203 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:08.252367 1143678 cri.go:89] found id: ""
	I0603 13:52:08.252404 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.252417 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:08.252427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:08.252500 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:08.289249 1143678 cri.go:89] found id: ""
	I0603 13:52:08.289279 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.289290 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:08.289298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:08.289364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:08.331155 1143678 cri.go:89] found id: ""
	I0603 13:52:08.331181 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.331195 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:08.331201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:08.331258 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:08.371376 1143678 cri.go:89] found id: ""
	I0603 13:52:08.371400 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.371408 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:08.371415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:08.371477 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:08.408009 1143678 cri.go:89] found id: ""
	I0603 13:52:08.408045 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.408057 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:08.408065 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:08.408119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:08.446377 1143678 cri.go:89] found id: ""
	I0603 13:52:08.446413 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.446421 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:08.446429 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:08.446504 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:08.485429 1143678 cri.go:89] found id: ""
	I0603 13:52:08.485461 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.485471 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:08.485479 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:08.485546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:08.527319 1143678 cri.go:89] found id: ""
	I0603 13:52:08.527363 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.527375 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:08.527388 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:08.527414 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:08.602347 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:08.602371 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:08.602384 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:08.683855 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:08.683902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.724402 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:08.724443 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:08.781154 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:08.781202 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.297827 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:11.313927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:11.314006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:11.352622 1143678 cri.go:89] found id: ""
	I0603 13:52:11.352660 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.352671 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:11.352678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:11.352755 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:11.395301 1143678 cri.go:89] found id: ""
	I0603 13:52:11.395338 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.395351 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:11.395360 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:11.395442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:11.431104 1143678 cri.go:89] found id: ""
	I0603 13:52:11.431143 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.431155 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:11.431170 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:11.431234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:11.470177 1143678 cri.go:89] found id: ""
	I0603 13:52:11.470212 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.470223 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:11.470241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:11.470309 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:11.508741 1143678 cri.go:89] found id: ""
	I0603 13:52:11.508779 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.508803 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:11.508810 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:11.508906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:11.544970 1143678 cri.go:89] found id: ""
	I0603 13:52:11.545002 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.545012 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:11.545022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:11.545093 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:11.583606 1143678 cri.go:89] found id: ""
	I0603 13:52:11.583636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.583653 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:11.583666 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:11.583739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:11.624770 1143678 cri.go:89] found id: ""
	I0603 13:52:11.624806 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.624815 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:11.624824 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:11.624841 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:11.680251 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:11.680298 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.695656 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:11.695695 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:11.770414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:11.770478 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:11.770497 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:11.850812 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:11.850871 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:14.398649 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:14.411591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:14.411689 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:14.447126 1143678 cri.go:89] found id: ""
	I0603 13:52:14.447158 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.447170 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:14.447178 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:14.447245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:14.486681 1143678 cri.go:89] found id: ""
	I0603 13:52:14.486716 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.486728 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:14.486735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:14.486799 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:14.521297 1143678 cri.go:89] found id: ""
	I0603 13:52:14.521326 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.521337 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:14.521343 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:14.521443 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:14.565086 1143678 cri.go:89] found id: ""
	I0603 13:52:14.565121 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.565130 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:14.565136 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:14.565196 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:14.601947 1143678 cri.go:89] found id: ""
	I0603 13:52:14.601975 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.601984 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:14.601990 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:14.602044 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:14.638332 1143678 cri.go:89] found id: ""
	I0603 13:52:14.638359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.638366 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:14.638374 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:14.638435 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:14.675254 1143678 cri.go:89] found id: ""
	I0603 13:52:14.675284 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.675293 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:14.675299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:14.675354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:14.712601 1143678 cri.go:89] found id: ""
	I0603 13:52:14.712631 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.712639 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:14.712649 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:14.712663 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:14.787026 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:14.787068 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:14.836534 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:14.836564 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:14.889682 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:14.889729 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:14.905230 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:14.905264 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:14.979090 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:17.479590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:17.495088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:17.495250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:17.530832 1143678 cri.go:89] found id: ""
	I0603 13:52:17.530871 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.530883 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:17.530891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:17.530966 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:17.567183 1143678 cri.go:89] found id: ""
	I0603 13:52:17.567213 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.567224 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:17.567232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:17.567305 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:17.602424 1143678 cri.go:89] found id: ""
	I0603 13:52:17.602458 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.602469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:17.602493 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:17.602570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:17.641148 1143678 cri.go:89] found id: ""
	I0603 13:52:17.641184 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.641197 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:17.641205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:17.641273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:17.679004 1143678 cri.go:89] found id: ""
	I0603 13:52:17.679031 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.679039 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:17.679045 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:17.679102 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:17.717667 1143678 cri.go:89] found id: ""
	I0603 13:52:17.717698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.717707 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:17.717715 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:17.717786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:17.760262 1143678 cri.go:89] found id: ""
	I0603 13:52:17.760300 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.760323 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:17.760331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:17.760416 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:17.796910 1143678 cri.go:89] found id: ""
	I0603 13:52:17.796943 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.796960 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:17.796976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:17.796990 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:17.811733 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:17.811768 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:17.891891 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:17.891920 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:17.891939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:17.969495 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:17.969535 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:18.032622 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:18.032654 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.586079 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:20.599118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:20.599202 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:20.633732 1143678 cri.go:89] found id: ""
	I0603 13:52:20.633770 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.633780 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:20.633787 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:20.633841 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:20.668126 1143678 cri.go:89] found id: ""
	I0603 13:52:20.668155 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.668163 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:20.668169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:20.668231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:20.704144 1143678 cri.go:89] found id: ""
	I0603 13:52:20.704177 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.704187 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:20.704194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:20.704251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:20.745562 1143678 cri.go:89] found id: ""
	I0603 13:52:20.745594 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.745602 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:20.745608 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:20.745663 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:20.788998 1143678 cri.go:89] found id: ""
	I0603 13:52:20.789041 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.789053 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:20.789075 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:20.789152 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:20.832466 1143678 cri.go:89] found id: ""
	I0603 13:52:20.832495 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.832503 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:20.832510 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:20.832575 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:20.875212 1143678 cri.go:89] found id: ""
	I0603 13:52:20.875248 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.875258 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:20.875267 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:20.875336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:20.912957 1143678 cri.go:89] found id: ""
	I0603 13:52:20.912989 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.912999 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:20.913011 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:20.913030 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.963655 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:20.963700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:20.978619 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:20.978658 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:21.057136 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:21.057163 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:21.057185 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:21.136368 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:21.136415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:23.676222 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:23.691111 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:23.691213 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:23.733282 1143678 cri.go:89] found id: ""
	I0603 13:52:23.733319 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.733332 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:23.733341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:23.733438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:23.780841 1143678 cri.go:89] found id: ""
	I0603 13:52:23.780873 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.780882 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:23.780894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:23.780947 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:23.820521 1143678 cri.go:89] found id: ""
	I0603 13:52:23.820553 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.820565 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:23.820573 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:23.820636 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:23.857684 1143678 cri.go:89] found id: ""
	I0603 13:52:23.857728 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.857739 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:23.857747 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:23.857818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:23.896800 1143678 cri.go:89] found id: ""
	I0603 13:52:23.896829 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.896842 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:23.896850 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:23.896914 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:23.935511 1143678 cri.go:89] found id: ""
	I0603 13:52:23.935538 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.935547 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:23.935554 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:23.935608 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:23.973858 1143678 cri.go:89] found id: ""
	I0603 13:52:23.973885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.973895 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:23.973901 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:23.973961 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:24.012491 1143678 cri.go:89] found id: ""
	I0603 13:52:24.012521 1143678 logs.go:276] 0 containers: []
	W0603 13:52:24.012532 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:24.012545 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:24.012569 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.064274 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:24.064319 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:24.079382 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:24.079420 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:24.153708 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:24.153733 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:24.153749 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:24.233104 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:24.233148 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:26.774771 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:26.789853 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:26.789924 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:26.830089 1143678 cri.go:89] found id: ""
	I0603 13:52:26.830129 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.830167 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:26.830176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:26.830251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:26.866907 1143678 cri.go:89] found id: ""
	I0603 13:52:26.866941 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.866952 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:26.866960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:26.867031 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:26.915028 1143678 cri.go:89] found id: ""
	I0603 13:52:26.915061 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.915070 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:26.915079 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:26.915151 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:26.962044 1143678 cri.go:89] found id: ""
	I0603 13:52:26.962075 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.962083 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:26.962088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:26.962154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:26.996156 1143678 cri.go:89] found id: ""
	I0603 13:52:26.996188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.996196 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:26.996202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:26.996265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:27.038593 1143678 cri.go:89] found id: ""
	I0603 13:52:27.038627 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.038636 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:27.038642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:27.038708 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:27.076116 1143678 cri.go:89] found id: ""
	I0603 13:52:27.076144 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.076153 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:27.076159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:27.076228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:27.110653 1143678 cri.go:89] found id: ""
	I0603 13:52:27.110688 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.110700 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:27.110714 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:27.110733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:27.193718 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:27.193743 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:27.193756 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:27.269423 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:27.269483 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:27.307899 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:27.307939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:27.363830 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:27.363878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:29.879016 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:29.893482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:29.893553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:29.932146 1143678 cri.go:89] found id: ""
	I0603 13:52:29.932190 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.932199 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:29.932205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:29.932259 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:29.968986 1143678 cri.go:89] found id: ""
	I0603 13:52:29.969020 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.969032 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:29.969040 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:29.969097 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:30.007190 1143678 cri.go:89] found id: ""
	I0603 13:52:30.007228 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.007238 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:30.007244 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:30.007303 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:30.044607 1143678 cri.go:89] found id: ""
	I0603 13:52:30.044638 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.044646 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:30.044652 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:30.044706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:30.083103 1143678 cri.go:89] found id: ""
	I0603 13:52:30.083179 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.083193 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:30.083204 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:30.083280 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:30.124125 1143678 cri.go:89] found id: ""
	I0603 13:52:30.124152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.124160 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:30.124167 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:30.124234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:30.164293 1143678 cri.go:89] found id: ""
	I0603 13:52:30.164329 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.164345 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:30.164353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:30.164467 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:30.219980 1143678 cri.go:89] found id: ""
	I0603 13:52:30.220015 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.220028 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:30.220042 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:30.220063 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:30.313282 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:30.313305 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:30.313323 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:30.393759 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:30.393801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:30.441384 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:30.441434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:30.493523 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:30.493558 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:33.009114 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:33.023177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:33.023278 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:33.065346 1143678 cri.go:89] found id: ""
	I0603 13:52:33.065388 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.065400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:33.065424 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:33.065506 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:33.108513 1143678 cri.go:89] found id: ""
	I0603 13:52:33.108549 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.108561 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:33.108569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:33.108640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:33.146053 1143678 cri.go:89] found id: ""
	I0603 13:52:33.146082 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.146089 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:33.146107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:33.146165 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:33.187152 1143678 cri.go:89] found id: ""
	I0603 13:52:33.187195 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.187206 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:33.187216 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:33.187302 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:33.223887 1143678 cri.go:89] found id: ""
	I0603 13:52:33.223920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.223932 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:33.223941 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:33.224010 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:33.263902 1143678 cri.go:89] found id: ""
	I0603 13:52:33.263958 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.263971 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:33.263980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:33.264048 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:33.302753 1143678 cri.go:89] found id: ""
	I0603 13:52:33.302785 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.302796 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:33.302805 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:33.302859 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:33.340711 1143678 cri.go:89] found id: ""
	I0603 13:52:33.340745 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.340754 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:33.340763 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:33.340780 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:33.400226 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:33.400271 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:33.414891 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:33.414923 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:33.498121 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:33.498156 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:33.498172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.575682 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:33.575731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.116930 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:36.133001 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:36.133070 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:36.182727 1143678 cri.go:89] found id: ""
	I0603 13:52:36.182763 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.182774 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:36.182782 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:36.182851 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:36.228804 1143678 cri.go:89] found id: ""
	I0603 13:52:36.228841 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.228854 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:36.228862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:36.228929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:36.279320 1143678 cri.go:89] found id: ""
	I0603 13:52:36.279359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.279370 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:36.279378 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:36.279461 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:36.319725 1143678 cri.go:89] found id: ""
	I0603 13:52:36.319751 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.319759 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:36.319765 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:36.319819 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:36.356657 1143678 cri.go:89] found id: ""
	I0603 13:52:36.356685 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.356693 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:36.356703 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:36.356760 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:36.393397 1143678 cri.go:89] found id: ""
	I0603 13:52:36.393448 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.393459 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:36.393467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:36.393545 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:36.429211 1143678 cri.go:89] found id: ""
	I0603 13:52:36.429246 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.429254 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:36.429260 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:36.429324 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:36.466796 1143678 cri.go:89] found id: ""
	I0603 13:52:36.466831 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.466839 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:36.466849 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:36.466862 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.509871 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:36.509900 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:36.562167 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:36.562206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:36.577014 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:36.577047 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:36.657581 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:36.657604 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:36.657625 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:39.242339 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:39.257985 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:39.258072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:39.300153 1143678 cri.go:89] found id: ""
	I0603 13:52:39.300185 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.300197 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:39.300205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:39.300304 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:39.336117 1143678 cri.go:89] found id: ""
	I0603 13:52:39.336152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.336162 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:39.336175 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:39.336307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:39.375945 1143678 cri.go:89] found id: ""
	I0603 13:52:39.375979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.375990 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:39.375998 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:39.376066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:39.417207 1143678 cri.go:89] found id: ""
	I0603 13:52:39.417242 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.417253 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:39.417261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:39.417340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:39.456259 1143678 cri.go:89] found id: ""
	I0603 13:52:39.456295 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.456307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:39.456315 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:39.456377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:39.494879 1143678 cri.go:89] found id: ""
	I0603 13:52:39.494904 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.494913 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:39.494919 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:39.494979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:39.532129 1143678 cri.go:89] found id: ""
	I0603 13:52:39.532157 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.532168 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:39.532177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:39.532267 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:39.570662 1143678 cri.go:89] found id: ""
	I0603 13:52:39.570693 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.570703 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:39.570717 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:39.570734 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:39.622008 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:39.622057 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:39.636849 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:39.636884 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:39.719914 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:39.719948 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:39.719967 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:39.801723 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:39.801769 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:42.348936 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:42.363663 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:42.363735 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:42.400584 1143678 cri.go:89] found id: ""
	I0603 13:52:42.400616 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.400625 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:42.400631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:42.400685 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:42.438853 1143678 cri.go:89] found id: ""
	I0603 13:52:42.438885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.438893 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:42.438899 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:42.438954 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:42.474980 1143678 cri.go:89] found id: ""
	I0603 13:52:42.475013 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.475025 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:42.475032 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:42.475086 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:42.511027 1143678 cri.go:89] found id: ""
	I0603 13:52:42.511056 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.511068 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:42.511077 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:42.511237 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:42.545333 1143678 cri.go:89] found id: ""
	I0603 13:52:42.545367 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.545378 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:42.545386 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:42.545468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:42.583392 1143678 cri.go:89] found id: ""
	I0603 13:52:42.583438 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.583556 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:42.583591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:42.583656 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:42.620886 1143678 cri.go:89] found id: ""
	I0603 13:52:42.620916 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.620924 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:42.620930 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:42.620985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:42.656265 1143678 cri.go:89] found id: ""
	I0603 13:52:42.656301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.656313 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:42.656327 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:42.656344 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:42.711078 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:42.711124 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:42.727751 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:42.727788 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:42.802330 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:42.802356 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:42.802370 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:42.883700 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:42.883742 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.424591 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:45.440797 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:45.440883 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:45.483664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.483698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.483709 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:45.483717 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:45.483789 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:45.523147 1143678 cri.go:89] found id: ""
	I0603 13:52:45.523182 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.523193 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:45.523201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:45.523273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:45.563483 1143678 cri.go:89] found id: ""
	I0603 13:52:45.563516 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.563527 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:45.563536 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:45.563598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:45.603574 1143678 cri.go:89] found id: ""
	I0603 13:52:45.603603 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.603618 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:45.603625 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:45.603680 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:45.642664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.642694 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.642705 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:45.642714 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:45.642793 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:45.679961 1143678 cri.go:89] found id: ""
	I0603 13:52:45.679998 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.680011 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:45.680026 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:45.680100 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:45.716218 1143678 cri.go:89] found id: ""
	I0603 13:52:45.716255 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.716263 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:45.716270 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:45.716364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:45.752346 1143678 cri.go:89] found id: ""
	I0603 13:52:45.752374 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.752382 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:45.752391 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:45.752405 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.793992 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:45.794029 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:45.844930 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:45.844973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:45.859594 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:45.859633 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:45.936469 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:45.936498 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:45.936515 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:48.514959 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:48.528331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:48.528401 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:48.565671 1143678 cri.go:89] found id: ""
	I0603 13:52:48.565703 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.565715 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:48.565724 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:48.565786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:48.603938 1143678 cri.go:89] found id: ""
	I0603 13:52:48.603973 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.603991 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:48.604000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:48.604068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:48.643521 1143678 cri.go:89] found id: ""
	I0603 13:52:48.643550 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.643562 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:48.643571 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:48.643627 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:48.678264 1143678 cri.go:89] found id: ""
	I0603 13:52:48.678301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.678312 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:48.678320 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:48.678407 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:48.714974 1143678 cri.go:89] found id: ""
	I0603 13:52:48.715014 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.715026 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:48.715034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:48.715138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:48.750364 1143678 cri.go:89] found id: ""
	I0603 13:52:48.750396 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.750408 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:48.750416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:48.750482 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:48.788203 1143678 cri.go:89] found id: ""
	I0603 13:52:48.788238 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.788249 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:48.788258 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:48.788345 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:48.826891 1143678 cri.go:89] found id: ""
	I0603 13:52:48.826920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.826928 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:48.826938 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:48.826951 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:48.877271 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:48.877315 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:48.892155 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:48.892187 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:48.973433 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:48.973459 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:48.973473 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:49.062819 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:49.062888 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:51.614261 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:51.628056 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:51.628142 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:51.662894 1143678 cri.go:89] found id: ""
	I0603 13:52:51.662924 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.662935 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:51.662942 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:51.663009 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:51.701847 1143678 cri.go:89] found id: ""
	I0603 13:52:51.701878 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.701889 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:51.701896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:51.701963 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:51.737702 1143678 cri.go:89] found id: ""
	I0603 13:52:51.737741 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.737752 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:51.737760 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:51.737833 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:51.772913 1143678 cri.go:89] found id: ""
	I0603 13:52:51.772944 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.772956 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:51.772964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:51.773034 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:51.810268 1143678 cri.go:89] found id: ""
	I0603 13:52:51.810298 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.810307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:51.810312 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:51.810377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:51.848575 1143678 cri.go:89] found id: ""
	I0603 13:52:51.848612 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.848624 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:51.848633 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:51.848696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:51.886500 1143678 cri.go:89] found id: ""
	I0603 13:52:51.886536 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.886549 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:51.886560 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:51.886617 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:51.924070 1143678 cri.go:89] found id: ""
	I0603 13:52:51.924104 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.924115 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:51.924128 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:51.924146 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:51.940324 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:51.940355 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:52.019958 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:52.019997 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:52.020015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:52.095953 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:52.095999 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:52.141070 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:52.141102 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:54.694651 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:54.708508 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:54.708597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:54.745708 1143678 cri.go:89] found id: ""
	I0603 13:52:54.745748 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.745762 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:54.745770 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:54.745842 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:54.783335 1143678 cri.go:89] found id: ""
	I0603 13:52:54.783369 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.783381 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:54.783389 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:54.783465 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:54.824111 1143678 cri.go:89] found id: ""
	I0603 13:52:54.824140 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.824151 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:54.824159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:54.824230 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:54.868676 1143678 cri.go:89] found id: ""
	I0603 13:52:54.868710 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.868721 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:54.868730 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:54.868801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:54.906180 1143678 cri.go:89] found id: ""
	I0603 13:52:54.906216 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.906227 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:54.906235 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:54.906310 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:54.945499 1143678 cri.go:89] found id: ""
	I0603 13:52:54.945532 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.945544 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:54.945552 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:54.945619 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:54.986785 1143678 cri.go:89] found id: ""
	I0603 13:52:54.986812 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.986820 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:54.986826 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:54.986888 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:55.035290 1143678 cri.go:89] found id: ""
	I0603 13:52:55.035320 1143678 logs.go:276] 0 containers: []
	W0603 13:52:55.035329 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:55.035338 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:55.035352 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:55.085384 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:55.085451 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:55.100699 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:55.100733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:55.171587 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:55.171614 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:55.171638 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:55.249078 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:55.249123 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:57.791538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:57.804373 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:57.804437 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:57.843969 1143678 cri.go:89] found id: ""
	I0603 13:52:57.844007 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.844016 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:57.844022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:57.844077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:57.881201 1143678 cri.go:89] found id: ""
	I0603 13:52:57.881239 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.881252 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:57.881261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:57.881336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:57.917572 1143678 cri.go:89] found id: ""
	I0603 13:52:57.917601 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.917610 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:57.917617 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:57.917671 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:57.951603 1143678 cri.go:89] found id: ""
	I0603 13:52:57.951642 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.951654 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:57.951661 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:57.951716 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:57.992833 1143678 cri.go:89] found id: ""
	I0603 13:52:57.992863 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.992874 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:57.992881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:57.992945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:58.031595 1143678 cri.go:89] found id: ""
	I0603 13:52:58.031636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.031648 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:58.031657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:58.031723 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:58.068947 1143678 cri.go:89] found id: ""
	I0603 13:52:58.068985 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.068996 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:58.069005 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:58.069077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:58.106559 1143678 cri.go:89] found id: ""
	I0603 13:52:58.106587 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.106598 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:58.106623 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:58.106640 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:58.162576 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:58.162623 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:58.177104 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:58.177155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:58.250279 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:58.250312 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:58.250329 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.330876 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:58.330920 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:00.871443 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:00.885505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:00.885589 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:00.923878 1143678 cri.go:89] found id: ""
	I0603 13:53:00.923910 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.923920 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:00.923928 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:00.923995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:00.960319 1143678 cri.go:89] found id: ""
	I0603 13:53:00.960362 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.960375 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:00.960384 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:00.960449 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:00.998806 1143678 cri.go:89] found id: ""
	I0603 13:53:00.998845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.998857 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:00.998866 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:00.998929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:01.033211 1143678 cri.go:89] found id: ""
	I0603 13:53:01.033245 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.033256 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:01.033265 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:01.033341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:01.072852 1143678 cri.go:89] found id: ""
	I0603 13:53:01.072883 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.072891 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:01.072898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:01.072950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:01.115667 1143678 cri.go:89] found id: ""
	I0603 13:53:01.115699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.115711 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:01.115719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:01.115824 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:01.153676 1143678 cri.go:89] found id: ""
	I0603 13:53:01.153717 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.153733 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:01.153741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:01.153815 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:01.188970 1143678 cri.go:89] found id: ""
	I0603 13:53:01.189003 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.189017 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:01.189031 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:01.189049 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:01.233151 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:01.233214 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:01.287218 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:01.287269 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:01.302370 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:01.302408 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:01.378414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:01.378444 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:01.378463 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:03.957327 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:03.971246 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:03.971340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:04.007299 1143678 cri.go:89] found id: ""
	I0603 13:53:04.007335 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.007347 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:04.007356 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:04.007427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:04.046364 1143678 cri.go:89] found id: ""
	I0603 13:53:04.046396 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.046405 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:04.046411 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:04.046469 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:04.082094 1143678 cri.go:89] found id: ""
	I0603 13:53:04.082127 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.082139 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:04.082148 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:04.082209 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:04.117389 1143678 cri.go:89] found id: ""
	I0603 13:53:04.117434 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.117446 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:04.117454 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:04.117530 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:04.150560 1143678 cri.go:89] found id: ""
	I0603 13:53:04.150596 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.150606 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:04.150614 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:04.150678 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:04.184808 1143678 cri.go:89] found id: ""
	I0603 13:53:04.184845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.184857 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:04.184865 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:04.184935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:04.220286 1143678 cri.go:89] found id: ""
	I0603 13:53:04.220317 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.220326 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:04.220332 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:04.220385 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:04.258898 1143678 cri.go:89] found id: ""
	I0603 13:53:04.258929 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.258941 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:04.258955 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:04.258972 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:04.312151 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:04.312198 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:04.329908 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:04.329943 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:04.402075 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:04.402106 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:04.402138 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:04.482873 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:04.482936 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:07.049978 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:07.063072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:07.063140 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:07.097703 1143678 cri.go:89] found id: ""
	I0603 13:53:07.097737 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.097748 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:07.097755 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:07.097811 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:07.134826 1143678 cri.go:89] found id: ""
	I0603 13:53:07.134865 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.134878 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:07.134886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:07.134955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:07.178015 1143678 cri.go:89] found id: ""
	I0603 13:53:07.178050 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.178061 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:07.178068 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:07.178138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:07.215713 1143678 cri.go:89] found id: ""
	I0603 13:53:07.215753 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.215764 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:07.215777 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:07.215840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:07.251787 1143678 cri.go:89] found id: ""
	I0603 13:53:07.251815 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.251824 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:07.251830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:07.251897 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:07.293357 1143678 cri.go:89] found id: ""
	I0603 13:53:07.293387 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.293398 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:07.293427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:07.293496 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:07.329518 1143678 cri.go:89] found id: ""
	I0603 13:53:07.329551 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.329561 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:07.329569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:07.329650 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:07.369534 1143678 cri.go:89] found id: ""
	I0603 13:53:07.369576 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.369587 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:07.369601 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:07.369617 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:07.424211 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:07.424260 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:07.439135 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:07.439172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:07.511325 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:07.511360 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:07.511378 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:07.588348 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:07.588393 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:10.129812 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:10.143977 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:10.144057 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:10.181873 1143678 cri.go:89] found id: ""
	I0603 13:53:10.181906 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.181918 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:10.181926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:10.181981 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:10.218416 1143678 cri.go:89] found id: ""
	I0603 13:53:10.218460 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.218473 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:10.218482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:10.218562 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:10.253580 1143678 cri.go:89] found id: ""
	I0603 13:53:10.253618 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.253630 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:10.253646 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:10.253717 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:10.302919 1143678 cri.go:89] found id: ""
	I0603 13:53:10.302949 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.302957 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:10.302964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:10.303024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:10.343680 1143678 cri.go:89] found id: ""
	I0603 13:53:10.343709 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.343721 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:10.343729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:10.343798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:10.379281 1143678 cri.go:89] found id: ""
	I0603 13:53:10.379307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.379315 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:10.379322 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:10.379374 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:10.420197 1143678 cri.go:89] found id: ""
	I0603 13:53:10.420225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.420233 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:10.420239 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:10.420322 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:10.458578 1143678 cri.go:89] found id: ""
	I0603 13:53:10.458609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.458618 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:10.458629 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:10.458642 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:10.511785 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:10.511828 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:10.526040 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:10.526081 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:10.603721 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:10.603749 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:10.603766 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:10.684153 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:10.684204 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:13.227605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:13.241131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:13.241228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:13.284636 1143678 cri.go:89] found id: ""
	I0603 13:53:13.284667 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.284675 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:13.284681 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:13.284737 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:13.322828 1143678 cri.go:89] found id: ""
	I0603 13:53:13.322861 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.322873 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:13.322881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:13.322945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:13.360061 1143678 cri.go:89] found id: ""
	I0603 13:53:13.360089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.360097 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:13.360103 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:13.360176 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:13.397115 1143678 cri.go:89] found id: ""
	I0603 13:53:13.397149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.397158 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:13.397164 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:13.397234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:13.434086 1143678 cri.go:89] found id: ""
	I0603 13:53:13.434118 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.434127 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:13.434135 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:13.434194 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:13.470060 1143678 cri.go:89] found id: ""
	I0603 13:53:13.470089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.470101 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:13.470113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:13.470189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:13.508423 1143678 cri.go:89] found id: ""
	I0603 13:53:13.508464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.508480 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:13.508487 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:13.508552 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:13.546713 1143678 cri.go:89] found id: ""
	I0603 13:53:13.546752 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.546765 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:13.546778 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:13.546796 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:13.632984 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:13.633027 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:13.679169 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:13.679216 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:13.735765 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:13.735812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.750175 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:13.750210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:13.826571 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
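	(Every "describe nodes" attempt in this log fails the same way because nothing is listening on the apiserver port yet. As a side illustration, not part of the test itself, a plain TCP dial against 127.0.0.1:8443 reproduces the "connection refused" seen above; the address and port come straight from the error text, everything else in this sketch is assumed:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 127.0.0.1:8443 is the endpoint named in the "connection refused" errors above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port not reachable:", err) // expected while the control plane is down
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8443; kubectl should be able to connect")
	})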
	I0603 13:53:16.327185 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:16.340163 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:16.340253 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:16.380260 1143678 cri.go:89] found id: ""
	I0603 13:53:16.380292 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.380300 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:16.380307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:16.380373 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:16.420408 1143678 cri.go:89] found id: ""
	I0603 13:53:16.420438 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.420449 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:16.420457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:16.420534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:16.459250 1143678 cri.go:89] found id: ""
	I0603 13:53:16.459285 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.459297 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:16.459307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:16.459377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:16.496395 1143678 cri.go:89] found id: ""
	I0603 13:53:16.496427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.496436 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:16.496444 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:16.496516 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:16.534402 1143678 cri.go:89] found id: ""
	I0603 13:53:16.534433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.534442 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:16.534449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:16.534514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:16.571550 1143678 cri.go:89] found id: ""
	I0603 13:53:16.571577 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.571584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:16.571591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:16.571659 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:16.608425 1143678 cri.go:89] found id: ""
	I0603 13:53:16.608457 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.608468 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:16.608482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:16.608549 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:16.647282 1143678 cri.go:89] found id: ""
	I0603 13:53:16.647315 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.647324 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:16.647334 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:16.647351 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:16.728778 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.728814 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:16.728831 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:16.822702 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:16.822747 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:16.868816 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:16.868845 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:16.922262 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:16.922301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:19.438231 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:19.452520 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:19.452603 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:19.488089 1143678 cri.go:89] found id: ""
	I0603 13:53:19.488121 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.488133 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:19.488141 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:19.488216 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:19.524494 1143678 cri.go:89] found id: ""
	I0603 13:53:19.524527 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.524537 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:19.524543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:19.524595 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:19.561288 1143678 cri.go:89] found id: ""
	I0603 13:53:19.561323 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.561333 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:19.561341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:19.561420 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:19.597919 1143678 cri.go:89] found id: ""
	I0603 13:53:19.597965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.597976 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:19.597984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:19.598056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:19.634544 1143678 cri.go:89] found id: ""
	I0603 13:53:19.634579 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.634591 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:19.634599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:19.634668 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:19.671473 1143678 cri.go:89] found id: ""
	I0603 13:53:19.671506 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.671518 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:19.671527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:19.671598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:19.707968 1143678 cri.go:89] found id: ""
	I0603 13:53:19.708000 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.708011 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:19.708019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:19.708119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:19.745555 1143678 cri.go:89] found id: ""
	I0603 13:53:19.745593 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.745604 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:19.745617 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:19.745631 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:19.830765 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:19.830812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:19.875160 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:19.875197 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:19.927582 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:19.927627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:19.942258 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:19.942289 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:20.016081 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:22.516859 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:22.534973 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:22.535040 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:22.593003 1143678 cri.go:89] found id: ""
	I0603 13:53:22.593043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.593051 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:22.593058 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:22.593121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:22.649916 1143678 cri.go:89] found id: ""
	I0603 13:53:22.649951 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.649963 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:22.649971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:22.650030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:22.689397 1143678 cri.go:89] found id: ""
	I0603 13:53:22.689449 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.689459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:22.689465 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:22.689521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:22.725109 1143678 cri.go:89] found id: ""
	I0603 13:53:22.725149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.725161 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:22.725169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:22.725250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:22.761196 1143678 cri.go:89] found id: ""
	I0603 13:53:22.761225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.761237 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:22.761245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:22.761311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:22.804065 1143678 cri.go:89] found id: ""
	I0603 13:53:22.804103 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.804112 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:22.804119 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:22.804189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:22.840456 1143678 cri.go:89] found id: ""
	I0603 13:53:22.840485 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.840493 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:22.840499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:22.840553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:22.876796 1143678 cri.go:89] found id: ""
	I0603 13:53:22.876831 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.876842 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:22.876854 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:22.876869 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:22.957274 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:22.957317 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:22.998360 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:22.998394 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.054895 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:23.054942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:23.070107 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:23.070141 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:23.147460 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:25.647727 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:25.663603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:25.663691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:25.698102 1143678 cri.go:89] found id: ""
	I0603 13:53:25.698139 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.698150 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:25.698159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:25.698227 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:25.738601 1143678 cri.go:89] found id: ""
	I0603 13:53:25.738641 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.738648 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:25.738655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:25.738718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:25.780622 1143678 cri.go:89] found id: ""
	I0603 13:53:25.780657 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.780670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:25.780678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:25.780751 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:25.816950 1143678 cri.go:89] found id: ""
	I0603 13:53:25.816978 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.816989 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:25.816997 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:25.817060 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:25.860011 1143678 cri.go:89] found id: ""
	I0603 13:53:25.860051 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.860063 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:25.860072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:25.860138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:25.898832 1143678 cri.go:89] found id: ""
	I0603 13:53:25.898866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.898878 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:25.898886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:25.898959 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:25.937483 1143678 cri.go:89] found id: ""
	I0603 13:53:25.937518 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.937533 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:25.937541 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:25.937607 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:25.973972 1143678 cri.go:89] found id: ""
	I0603 13:53:25.974008 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.974021 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:25.974034 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:25.974065 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:25.989188 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:25.989227 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:26.065521 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:26.065546 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:26.065560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:26.147852 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:26.147899 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:26.191395 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:26.191431 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:28.751041 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:28.764764 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:28.764826 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:28.808232 1143678 cri.go:89] found id: ""
	I0603 13:53:28.808271 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.808285 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:28.808293 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:28.808369 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:28.849058 1143678 cri.go:89] found id: ""
	I0603 13:53:28.849094 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.849107 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:28.849114 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:28.849187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:28.892397 1143678 cri.go:89] found id: ""
	I0603 13:53:28.892427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.892441 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:28.892447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:28.892515 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:28.932675 1143678 cri.go:89] found id: ""
	I0603 13:53:28.932715 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.932727 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:28.932735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:28.932840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:28.969732 1143678 cri.go:89] found id: ""
	I0603 13:53:28.969769 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.969781 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:28.969789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:28.969857 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:29.007765 1143678 cri.go:89] found id: ""
	I0603 13:53:29.007791 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.007798 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:29.007804 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:29.007865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:29.044616 1143678 cri.go:89] found id: ""
	I0603 13:53:29.044652 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.044664 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:29.044675 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:29.044734 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:29.081133 1143678 cri.go:89] found id: ""
	I0603 13:53:29.081166 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.081187 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:29.081198 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:29.081213 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:29.095753 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:29.095783 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:29.174472 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:29.174496 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:29.174516 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:29.251216 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:29.251262 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:29.289127 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:29.289168 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:31.845335 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:31.860631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:31.860720 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:31.904507 1143678 cri.go:89] found id: ""
	I0603 13:53:31.904544 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.904556 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:31.904564 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:31.904633 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:31.940795 1143678 cri.go:89] found id: ""
	I0603 13:53:31.940832 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.940845 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:31.940852 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:31.940921 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:31.978447 1143678 cri.go:89] found id: ""
	I0603 13:53:31.978481 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.978499 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:31.978507 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:31.978569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:32.017975 1143678 cri.go:89] found id: ""
	I0603 13:53:32.018009 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.018018 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:32.018025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:32.018089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:32.053062 1143678 cri.go:89] found id: ""
	I0603 13:53:32.053091 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.053099 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:32.053106 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:32.053181 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:32.089822 1143678 cri.go:89] found id: ""
	I0603 13:53:32.089856 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.089868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:32.089877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:32.089944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:32.126243 1143678 cri.go:89] found id: ""
	I0603 13:53:32.126280 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.126291 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:32.126299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:32.126358 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:32.163297 1143678 cri.go:89] found id: ""
	I0603 13:53:32.163346 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.163357 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:32.163370 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:32.163386 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:32.218452 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:32.218495 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:32.233688 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:32.233731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:32.318927 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:32.318947 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:32.318963 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:32.403734 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:32.403786 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
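	(The repeated `found id: ""` / `0 containers: []` pairs come from running crictl with --quiet, which prints only container IDs, so empty output means no container matches the name filter. A hedged Go sketch of that check follows; the helper name, the fixed list of component names and the error handling are illustrative assumptions, only the crictl invocation itself is taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs the same command seen in the log and returns the
	// matching container IDs, one per line of crictl's --quiet output.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // empty output -> no IDs -> "0 containers"
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Println(name, "lookup failed:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	})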
	I0603 13:53:34.947857 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:34.961894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:34.961983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:35.006279 1143678 cri.go:89] found id: ""
	I0603 13:53:35.006308 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.006318 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:35.006326 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:35.006398 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:35.042765 1143678 cri.go:89] found id: ""
	I0603 13:53:35.042794 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.042807 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:35.042815 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:35.042877 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:35.084332 1143678 cri.go:89] found id: ""
	I0603 13:53:35.084365 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.084375 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:35.084381 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:35.084448 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:35.121306 1143678 cri.go:89] found id: ""
	I0603 13:53:35.121337 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.121348 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:35.121358 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:35.121444 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:35.155952 1143678 cri.go:89] found id: ""
	I0603 13:53:35.155994 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.156008 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:35.156016 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:35.156089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:35.196846 1143678 cri.go:89] found id: ""
	I0603 13:53:35.196881 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.196893 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:35.196902 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:35.196972 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:35.232396 1143678 cri.go:89] found id: ""
	I0603 13:53:35.232429 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.232440 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:35.232449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:35.232528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:35.269833 1143678 cri.go:89] found id: ""
	I0603 13:53:35.269862 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.269872 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:35.269885 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:35.269902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:35.357754 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:35.357794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:35.399793 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:35.399822 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:35.453742 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:35.453782 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:35.468431 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:35.468465 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:35.547817 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:38.048517 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:38.063481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:38.063569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:38.100487 1143678 cri.go:89] found id: ""
	I0603 13:53:38.100523 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.100535 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:38.100543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:38.100612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:38.137627 1143678 cri.go:89] found id: ""
	I0603 13:53:38.137665 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.137678 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:38.137686 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:38.137754 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:38.176138 1143678 cri.go:89] found id: ""
	I0603 13:53:38.176172 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.176190 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:38.176199 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:38.176265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:38.214397 1143678 cri.go:89] found id: ""
	I0603 13:53:38.214439 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.214451 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:38.214459 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:38.214528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:38.250531 1143678 cri.go:89] found id: ""
	I0603 13:53:38.250563 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.250573 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:38.250580 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:38.250642 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:38.286558 1143678 cri.go:89] found id: ""
	I0603 13:53:38.286587 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.286595 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:38.286601 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:38.286652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:38.327995 1143678 cri.go:89] found id: ""
	I0603 13:53:38.328043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.328055 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:38.328062 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:38.328126 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:38.374266 1143678 cri.go:89] found id: ""
	I0603 13:53:38.374300 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.374311 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:38.374324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:38.374341 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:38.426876 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:38.426918 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:38.443296 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:38.443340 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:38.514702 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:38.514728 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:38.514746 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:38.601536 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:38.601590 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:41.141766 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:41.155927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:41.156006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:41.196829 1143678 cri.go:89] found id: ""
	I0603 13:53:41.196871 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.196884 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:41.196896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:41.196967 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:41.231729 1143678 cri.go:89] found id: ""
	I0603 13:53:41.231780 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.231802 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:41.231812 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:41.231900 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:41.266663 1143678 cri.go:89] found id: ""
	I0603 13:53:41.266699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.266711 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:41.266720 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:41.266783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:41.305251 1143678 cri.go:89] found id: ""
	I0603 13:53:41.305278 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.305286 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:41.305292 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:41.305351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:41.342527 1143678 cri.go:89] found id: ""
	I0603 13:53:41.342556 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.342568 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:41.342575 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:41.342637 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:41.379950 1143678 cri.go:89] found id: ""
	I0603 13:53:41.379982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.379992 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:41.379999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:41.380068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:41.414930 1143678 cri.go:89] found id: ""
	I0603 13:53:41.414965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.414973 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:41.414980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:41.415043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:41.449265 1143678 cri.go:89] found id: ""
	I0603 13:53:41.449299 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.449310 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:41.449324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:41.449343 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:41.502525 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:41.502560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:41.519357 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:41.519390 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:41.591443 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:41.591471 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:41.591485 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:41.668758 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:41.668802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:44.211768 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:44.226789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:44.226869 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:44.265525 1143678 cri.go:89] found id: ""
	I0603 13:53:44.265553 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.265561 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:44.265568 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:44.265646 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:44.304835 1143678 cri.go:89] found id: ""
	I0603 13:53:44.304866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.304874 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:44.304880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:44.304935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:44.345832 1143678 cri.go:89] found id: ""
	I0603 13:53:44.345875 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.345885 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:44.345891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:44.345950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:44.386150 1143678 cri.go:89] found id: ""
	I0603 13:53:44.386186 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.386198 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:44.386207 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:44.386268 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:44.423662 1143678 cri.go:89] found id: ""
	I0603 13:53:44.423697 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.423709 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:44.423719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:44.423788 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:44.462437 1143678 cri.go:89] found id: ""
	I0603 13:53:44.462464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.462473 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:44.462481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:44.462567 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:44.501007 1143678 cri.go:89] found id: ""
	I0603 13:53:44.501062 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.501074 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:44.501081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:44.501138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:44.535501 1143678 cri.go:89] found id: ""
	I0603 13:53:44.535543 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.535554 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:44.535567 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:44.535585 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:44.587114 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:44.587157 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:44.602151 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:44.602180 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:44.674065 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:44.674104 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:44.674122 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:44.757443 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:44.757488 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.306481 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:47.319895 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:47.319958 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:47.356975 1143678 cri.go:89] found id: ""
	I0603 13:53:47.357013 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.357026 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:47.357034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:47.357106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:47.393840 1143678 cri.go:89] found id: ""
	I0603 13:53:47.393869 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.393877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:47.393884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:47.393936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:47.428455 1143678 cri.go:89] found id: ""
	I0603 13:53:47.428493 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.428506 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:47.428514 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:47.428597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:47.463744 1143678 cri.go:89] found id: ""
	I0603 13:53:47.463777 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.463788 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:47.463795 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:47.463855 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:47.498134 1143678 cri.go:89] found id: ""
	I0603 13:53:47.498159 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.498167 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:47.498173 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:47.498245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:47.534153 1143678 cri.go:89] found id: ""
	I0603 13:53:47.534195 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.534206 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:47.534219 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:47.534272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:47.567148 1143678 cri.go:89] found id: ""
	I0603 13:53:47.567179 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.567187 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:47.567194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:47.567249 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:47.605759 1143678 cri.go:89] found id: ""
	I0603 13:53:47.605790 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.605798 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:47.605810 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:47.605824 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:47.683651 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:47.683692 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:47.683705 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:47.763810 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:47.763848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.806092 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:47.806131 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:47.859637 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:47.859677 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
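When no component containers are found, minikube falls back to gathering host-side logs. The commands below are copied from the ssh_runner lines above and can be replayed manually (for example via minikube ssh) to inspect the same sources:

    sudo journalctl -u kubelet -n 400        # last 400 kubelet journal lines
    sudo journalctl -u crio -n 400           # last 400 CRI-O journal lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a             # container status, with a docker fallback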
	I0603 13:53:50.377538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:50.391696 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:50.391776 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:50.433968 1143678 cri.go:89] found id: ""
	I0603 13:53:50.434001 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.434013 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:50.434020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:50.434080 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:50.470561 1143678 cri.go:89] found id: ""
	I0603 13:53:50.470589 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.470596 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:50.470603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:50.470662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:50.510699 1143678 cri.go:89] found id: ""
	I0603 13:53:50.510727 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.510735 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:50.510741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:50.510808 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:50.553386 1143678 cri.go:89] found id: ""
	I0603 13:53:50.553433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.553445 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:50.553452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:50.553533 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:50.589731 1143678 cri.go:89] found id: ""
	I0603 13:53:50.589779 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.589792 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:50.589801 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:50.589885 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:50.625144 1143678 cri.go:89] found id: ""
	I0603 13:53:50.625180 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.625192 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:50.625201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:50.625274 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:50.669021 1143678 cri.go:89] found id: ""
	I0603 13:53:50.669053 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.669061 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:50.669067 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:50.669121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:50.714241 1143678 cri.go:89] found id: ""
	I0603 13:53:50.714270 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.714284 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:50.714297 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:50.714314 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:50.766290 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:50.766333 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.797242 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:50.797275 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:50.866589 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:50.866616 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:50.866637 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:50.948808 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:50.948854 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:53.496797 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:53.511944 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:53.512021 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:53.549028 1143678 cri.go:89] found id: ""
	I0603 13:53:53.549057 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.549066 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:53.549072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:53.549128 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:53.583533 1143678 cri.go:89] found id: ""
	I0603 13:53:53.583566 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.583578 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:53.583586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:53.583652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:53.618578 1143678 cri.go:89] found id: ""
	I0603 13:53:53.618609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.618618 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:53.618626 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:53.618701 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:53.653313 1143678 cri.go:89] found id: ""
	I0603 13:53:53.653347 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.653358 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:53.653364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:53.653442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:53.689805 1143678 cri.go:89] found id: ""
	I0603 13:53:53.689839 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.689849 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:53.689857 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:53.689931 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:53.725538 1143678 cri.go:89] found id: ""
	I0603 13:53:53.725571 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.725584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:53.725592 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:53.725648 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:53.762284 1143678 cri.go:89] found id: ""
	I0603 13:53:53.762325 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.762336 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:53.762345 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:53.762419 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:53.799056 1143678 cri.go:89] found id: ""
	I0603 13:53:53.799083 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.799092 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:53.799102 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:53.799115 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:53.873743 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:53.873809 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:53.919692 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:53.919724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:53.969068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:53.969109 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.983840 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:53.983866 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:54.054842 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
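The recurring "connection to the server localhost:8443 was refused" simply means nothing is answering on the API server port yet. A hypothetical manual probe from inside the node (these commands are not part of the log) would confirm that:

    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    curl -ksm 5 https://localhost:8443/healthz || echo "apiserver not reachable"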
	I0603 13:53:56.555587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:56.570014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:56.570076 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:56.604352 1143678 cri.go:89] found id: ""
	I0603 13:53:56.604386 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.604400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:56.604408 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:56.604479 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:56.648126 1143678 cri.go:89] found id: ""
	I0603 13:53:56.648161 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.648171 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:56.648177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:56.648231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:56.685621 1143678 cri.go:89] found id: ""
	I0603 13:53:56.685658 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.685670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:56.685678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:56.685763 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:56.721860 1143678 cri.go:89] found id: ""
	I0603 13:53:56.721891 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.721913 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:56.721921 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:56.721989 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:56.757950 1143678 cri.go:89] found id: ""
	I0603 13:53:56.757982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.757995 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:56.758002 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:56.758068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:56.794963 1143678 cri.go:89] found id: ""
	I0603 13:53:56.794991 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.794999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:56.795007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:56.795072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:56.831795 1143678 cri.go:89] found id: ""
	I0603 13:53:56.831827 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.831839 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:56.831846 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:56.831913 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:56.869263 1143678 cri.go:89] found id: ""
	I0603 13:53:56.869293 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.869303 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:56.869314 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:56.869331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:56.945068 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.945096 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:56.945110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:57.028545 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:57.028582 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:57.069973 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:57.070009 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:57.126395 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:57.126436 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:59.644870 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:59.658547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:59.658634 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:59.694625 1143678 cri.go:89] found id: ""
	I0603 13:53:59.694656 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.694665 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:59.694673 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:59.694740 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:59.730475 1143678 cri.go:89] found id: ""
	I0603 13:53:59.730573 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.730590 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:59.730599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:59.730696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:59.768533 1143678 cri.go:89] found id: ""
	I0603 13:53:59.768567 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.768580 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:59.768590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:59.768662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:59.804913 1143678 cri.go:89] found id: ""
	I0603 13:53:59.804944 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.804953 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:59.804960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:59.805014 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:59.850331 1143678 cri.go:89] found id: ""
	I0603 13:53:59.850363 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.850376 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:59.850385 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:59.850466 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:59.890777 1143678 cri.go:89] found id: ""
	I0603 13:53:59.890814 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.890826 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:59.890834 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:59.890909 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:59.931233 1143678 cri.go:89] found id: ""
	I0603 13:53:59.931268 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.931277 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:59.931283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:59.931354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:59.966267 1143678 cri.go:89] found id: ""
	I0603 13:53:59.966307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.966319 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:59.966333 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:59.966356 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:00.019884 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:00.019924 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:00.034936 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:00.034982 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:00.115002 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:00.115035 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:00.115053 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:00.189992 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:00.190035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:02.737387 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:02.752131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:02.752220 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:02.787863 1143678 cri.go:89] found id: ""
	I0603 13:54:02.787893 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.787902 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:02.787908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:02.787974 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:02.824938 1143678 cri.go:89] found id: ""
	I0603 13:54:02.824973 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.824983 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:02.824989 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:02.825061 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:02.861425 1143678 cri.go:89] found id: ""
	I0603 13:54:02.861461 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.861469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:02.861476 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:02.861546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:02.907417 1143678 cri.go:89] found id: ""
	I0603 13:54:02.907453 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.907475 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:02.907483 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:02.907553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:02.953606 1143678 cri.go:89] found id: ""
	I0603 13:54:02.953640 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.953649 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:02.953655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:02.953728 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:03.007785 1143678 cri.go:89] found id: ""
	I0603 13:54:03.007816 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.007824 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:03.007830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:03.007896 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:03.058278 1143678 cri.go:89] found id: ""
	I0603 13:54:03.058316 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.058329 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:03.058338 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:03.058404 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:03.094766 1143678 cri.go:89] found id: ""
	I0603 13:54:03.094800 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.094811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:03.094824 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:03.094840 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:03.163663 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:03.163690 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:03.163704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:03.250751 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:03.250802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:03.292418 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:03.292466 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:03.344552 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:03.344600 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:05.859965 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:05.875255 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:05.875340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:05.918590 1143678 cri.go:89] found id: ""
	I0603 13:54:05.918619 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.918630 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:05.918637 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:05.918706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:05.953932 1143678 cri.go:89] found id: ""
	I0603 13:54:05.953969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.953980 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:05.953988 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:05.954056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:05.993319 1143678 cri.go:89] found id: ""
	I0603 13:54:05.993348 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.993359 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:05.993368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:05.993468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:06.033047 1143678 cri.go:89] found id: ""
	I0603 13:54:06.033079 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.033087 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:06.033100 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:06.033156 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:06.072607 1143678 cri.go:89] found id: ""
	I0603 13:54:06.072631 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.072640 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:06.072647 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:06.072698 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:06.109944 1143678 cri.go:89] found id: ""
	I0603 13:54:06.109990 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.109999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:06.110007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:06.110071 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:06.150235 1143678 cri.go:89] found id: ""
	I0603 13:54:06.150266 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.150276 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:06.150284 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:06.150349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:06.193963 1143678 cri.go:89] found id: ""
	I0603 13:54:06.193992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.194004 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:06.194017 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:06.194035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:06.235790 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:06.235827 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:06.289940 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:06.289980 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:06.305205 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:06.305240 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:06.381170 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:06.381191 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:06.381206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
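Each cycle begins with a pgrep for the kube-apiserver process and, when that fails, repeats the container and log checks a few seconds later. A rough sketch of that wait loop, using the pgrep pattern from the log and an illustrative timeout that is not taken from it:

    # Poll until the apiserver process shows up or a deadline passes.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        break
      fi
      sleep 3
    done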
	I0603 13:54:08.958985 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:08.973364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:08.973462 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:09.015050 1143678 cri.go:89] found id: ""
	I0603 13:54:09.015087 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.015099 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:09.015107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:09.015187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:09.054474 1143678 cri.go:89] found id: ""
	I0603 13:54:09.054508 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.054521 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:09.054533 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:09.054590 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:09.090867 1143678 cri.go:89] found id: ""
	I0603 13:54:09.090905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.090917 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:09.090926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:09.090995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:09.128401 1143678 cri.go:89] found id: ""
	I0603 13:54:09.128433 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.128441 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:09.128447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:09.128511 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:09.162952 1143678 cri.go:89] found id: ""
	I0603 13:54:09.162992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.163005 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:09.163013 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:09.163078 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:09.200375 1143678 cri.go:89] found id: ""
	I0603 13:54:09.200402 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.200410 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:09.200416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:09.200495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:09.244694 1143678 cri.go:89] found id: ""
	I0603 13:54:09.244729 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.244740 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:09.244749 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:09.244818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:09.281633 1143678 cri.go:89] found id: ""
	I0603 13:54:09.281666 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.281675 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:09.281686 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:09.281700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:09.341287 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:09.341331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:09.355379 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:09.355415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:09.435934 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:09.435960 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:09.435979 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:09.518203 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:09.518248 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.061538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:12.076939 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:12.077020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:12.114308 1143678 cri.go:89] found id: ""
	I0603 13:54:12.114344 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.114353 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:12.114359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:12.114427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:12.150336 1143678 cri.go:89] found id: ""
	I0603 13:54:12.150368 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.150383 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:12.150390 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:12.150455 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:12.189881 1143678 cri.go:89] found id: ""
	I0603 13:54:12.189934 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.189946 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:12.189954 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:12.190020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:12.226361 1143678 cri.go:89] found id: ""
	I0603 13:54:12.226396 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.226407 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:12.226415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:12.226488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:12.264216 1143678 cri.go:89] found id: ""
	I0603 13:54:12.264257 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.264265 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:12.264271 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:12.264341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:12.306563 1143678 cri.go:89] found id: ""
	I0603 13:54:12.306600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.306612 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:12.306620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:12.306690 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:12.347043 1143678 cri.go:89] found id: ""
	I0603 13:54:12.347082 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.347094 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:12.347105 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:12.347170 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:12.383947 1143678 cri.go:89] found id: ""
	I0603 13:54:12.383978 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.383989 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:12.384001 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:12.384018 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:12.464306 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:12.464348 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.505079 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:12.505110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:12.563631 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:12.563666 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:12.578328 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:12.578357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:12.646015 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.147166 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:15.163786 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:15.163865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:15.202249 1143678 cri.go:89] found id: ""
	I0603 13:54:15.202286 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.202296 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:15.202304 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:15.202372 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:15.236305 1143678 cri.go:89] found id: ""
	I0603 13:54:15.236345 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.236359 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:15.236368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:15.236459 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:15.273457 1143678 cri.go:89] found id: ""
	I0603 13:54:15.273493 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.273510 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:15.273521 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:15.273592 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:15.314917 1143678 cri.go:89] found id: ""
	I0603 13:54:15.314951 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.314963 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:15.314984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:15.315055 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:15.353060 1143678 cri.go:89] found id: ""
	I0603 13:54:15.353098 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.353112 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:15.353118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:15.353197 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:15.390412 1143678 cri.go:89] found id: ""
	I0603 13:54:15.390448 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.390460 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:15.390469 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:15.390534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:15.427735 1143678 cri.go:89] found id: ""
	I0603 13:54:15.427771 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.427782 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:15.427789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:15.427854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:15.467134 1143678 cri.go:89] found id: ""
	I0603 13:54:15.467165 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.467175 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:15.467184 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:15.467199 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:15.517924 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:15.517973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:15.531728 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:15.531760 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:15.608397 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.608421 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:15.608444 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:15.688976 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:15.689016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:18.228279 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:18.242909 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:18.242985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:18.285400 1143678 cri.go:89] found id: ""
	I0603 13:54:18.285445 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.285455 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:18.285461 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:18.285521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:18.321840 1143678 cri.go:89] found id: ""
	I0603 13:54:18.321868 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.321877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:18.321884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:18.321943 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:18.358856 1143678 cri.go:89] found id: ""
	I0603 13:54:18.358888 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.358902 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:18.358911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:18.358979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:18.395638 1143678 cri.go:89] found id: ""
	I0603 13:54:18.395678 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.395691 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:18.395699 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:18.395766 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:18.435541 1143678 cri.go:89] found id: ""
	I0603 13:54:18.435570 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.435581 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:18.435589 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:18.435653 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:18.469491 1143678 cri.go:89] found id: ""
	I0603 13:54:18.469527 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.469538 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:18.469545 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:18.469615 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:18.507986 1143678 cri.go:89] found id: ""
	I0603 13:54:18.508018 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.508030 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:18.508039 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:18.508106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:18.542311 1143678 cri.go:89] found id: ""
	I0603 13:54:18.542343 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.542351 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:18.542361 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:18.542375 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:18.619295 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.619337 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:18.662500 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:18.662540 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:18.714392 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:18.714432 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:18.728750 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:18.728785 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:18.800786 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
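Since every control-plane container is missing rather than crash-looping, the next thing to check (not shown in this log) would be the kubelet itself and the static-pod manifests it is supposed to launch; the paths below assume the standard kubeadm layout minikube uses:

    sudo systemctl status kubelet --no-pager | head -n 5   # is kubelet running at all?
    ls -l /etc/kubernetes/manifests/                       # static-pod manifests for the control plane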
	I0603 13:54:21.301554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:21.315880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:21.315944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:21.358178 1143678 cri.go:89] found id: ""
	I0603 13:54:21.358208 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.358217 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:21.358227 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:21.358289 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:21.395873 1143678 cri.go:89] found id: ""
	I0603 13:54:21.395969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.395995 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:21.396014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:21.396111 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:21.431781 1143678 cri.go:89] found id: ""
	I0603 13:54:21.431810 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.431822 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:21.431831 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:21.431906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.472840 1143678 cri.go:89] found id: ""
	I0603 13:54:21.472872 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.472885 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:21.472893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.472955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.512296 1143678 cri.go:89] found id: ""
	I0603 13:54:21.512333 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.512346 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:21.512353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.512421 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.547555 1143678 cri.go:89] found id: ""
	I0603 13:54:21.547588 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.547599 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:21.547609 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.547670 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.584972 1143678 cri.go:89] found id: ""
	I0603 13:54:21.585005 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.585013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.585019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:21.585085 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:21.621566 1143678 cri.go:89] found id: ""
	I0603 13:54:21.621599 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.621610 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:21.621623 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:21.621639 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:21.637223 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:21.637263 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:21.712272 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.712294 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.712310 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.800453 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:21.800490 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:21.841477 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.841525 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:24.394864 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:24.408416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.408527 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.444572 1143678 cri.go:89] found id: ""
	I0603 13:54:24.444603 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.444612 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:24.444618 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.444672 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.483710 1143678 cri.go:89] found id: ""
	I0603 13:54:24.483744 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.483755 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:24.483763 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.483837 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.522396 1143678 cri.go:89] found id: ""
	I0603 13:54:24.522437 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.522450 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:24.522457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.522520 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.560865 1143678 cri.go:89] found id: ""
	I0603 13:54:24.560896 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.560905 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:24.560911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.560964 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:24.598597 1143678 cri.go:89] found id: ""
	I0603 13:54:24.598632 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.598643 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:24.598657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:24.598722 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:24.638854 1143678 cri.go:89] found id: ""
	I0603 13:54:24.638885 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.638897 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:24.638908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:24.638979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:24.678039 1143678 cri.go:89] found id: ""
	I0603 13:54:24.678076 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.678088 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:24.678096 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:24.678166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:24.712836 1143678 cri.go:89] found id: ""
	I0603 13:54:24.712871 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.712883 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:24.712896 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:24.712913 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:24.763503 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:24.763545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:24.779383 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:24.779416 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:24.867254 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:24.867287 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:24.867307 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:24.944920 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:24.944957 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:27.495908 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:27.509885 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:27.509968 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:27.545591 1143678 cri.go:89] found id: ""
	I0603 13:54:27.545626 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.545635 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:27.545641 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:27.545695 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:27.583699 1143678 cri.go:89] found id: ""
	I0603 13:54:27.583728 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.583740 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:27.583748 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:27.583835 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:27.623227 1143678 cri.go:89] found id: ""
	I0603 13:54:27.623268 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.623277 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:27.623283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:27.623341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:27.663057 1143678 cri.go:89] found id: ""
	I0603 13:54:27.663090 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.663102 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:27.663109 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:27.663187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:27.708448 1143678 cri.go:89] found id: ""
	I0603 13:54:27.708481 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.708489 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:27.708495 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:27.708551 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:27.743629 1143678 cri.go:89] found id: ""
	I0603 13:54:27.743663 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.743674 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:27.743682 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:27.743748 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:27.778094 1143678 cri.go:89] found id: ""
	I0603 13:54:27.778128 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.778137 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:27.778147 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:27.778210 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:27.813137 1143678 cri.go:89] found id: ""
	I0603 13:54:27.813170 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.813180 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:27.813192 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:27.813208 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:27.861100 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:27.861136 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:27.914752 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:27.914794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:27.929479 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:27.929511 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:28.002898 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:28.002926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:28.002942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.581890 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:30.595982 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:30.596068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:30.638804 1143678 cri.go:89] found id: ""
	I0603 13:54:30.638841 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.638853 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:30.638862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:30.638942 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:30.677202 1143678 cri.go:89] found id: ""
	I0603 13:54:30.677242 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.677253 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:30.677262 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:30.677329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:30.717382 1143678 cri.go:89] found id: ""
	I0603 13:54:30.717436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.717446 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:30.717455 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:30.717523 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:30.753691 1143678 cri.go:89] found id: ""
	I0603 13:54:30.753719 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.753728 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:30.753734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:30.753798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:30.790686 1143678 cri.go:89] found id: ""
	I0603 13:54:30.790714 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.790723 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:30.790729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:30.790783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:30.830196 1143678 cri.go:89] found id: ""
	I0603 13:54:30.830224 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.830237 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:30.830245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:30.830299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:30.865952 1143678 cri.go:89] found id: ""
	I0603 13:54:30.865980 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.865992 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:30.866000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:30.866066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:30.901561 1143678 cri.go:89] found id: ""
	I0603 13:54:30.901592 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.901601 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:30.901610 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:30.901627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.979416 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:30.979459 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:31.035024 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:31.035061 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:31.089005 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:31.089046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:31.105176 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:31.105210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:31.172862 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:33.674069 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:33.688423 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:33.688499 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:33.729840 1143678 cri.go:89] found id: ""
	I0603 13:54:33.729876 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.729886 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:33.729893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:33.729945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:33.764984 1143678 cri.go:89] found id: ""
	I0603 13:54:33.765010 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.765018 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:33.765025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:33.765075 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:33.798411 1143678 cri.go:89] found id: ""
	I0603 13:54:33.798446 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.798459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:33.798468 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:33.798547 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:33.831565 1143678 cri.go:89] found id: ""
	I0603 13:54:33.831600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.831611 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:33.831620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:33.831688 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:33.869701 1143678 cri.go:89] found id: ""
	I0603 13:54:33.869727 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.869735 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:33.869741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:33.869802 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:33.906108 1143678 cri.go:89] found id: ""
	I0603 13:54:33.906134 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.906144 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:33.906153 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:33.906218 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:33.946577 1143678 cri.go:89] found id: ""
	I0603 13:54:33.946607 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.946615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:33.946621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:33.946673 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:33.986691 1143678 cri.go:89] found id: ""
	I0603 13:54:33.986724 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.986743 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:33.986757 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:33.986775 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:34.044068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:34.044110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:34.059686 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:34.059724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:34.141490 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:34.141514 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:34.141531 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:34.227890 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:34.227930 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:36.778969 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:36.792527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:36.792612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:36.828044 1143678 cri.go:89] found id: ""
	I0603 13:54:36.828083 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.828096 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:36.828102 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:36.828166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:36.863869 1143678 cri.go:89] found id: ""
	I0603 13:54:36.863905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.863917 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:36.863926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:36.863996 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:36.899610 1143678 cri.go:89] found id: ""
	I0603 13:54:36.899649 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.899661 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:36.899669 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:36.899742 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:36.938627 1143678 cri.go:89] found id: ""
	I0603 13:54:36.938664 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.938675 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:36.938683 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:36.938739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:36.973810 1143678 cri.go:89] found id: ""
	I0603 13:54:36.973842 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.973857 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:36.973863 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:36.973915 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.013759 1143678 cri.go:89] found id: ""
	I0603 13:54:37.013792 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.013805 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:37.013813 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.013881 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.049665 1143678 cri.go:89] found id: ""
	I0603 13:54:37.049697 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.049706 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.049712 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:37.049787 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:37.087405 1143678 cri.go:89] found id: ""
	I0603 13:54:37.087436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.087446 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:37.087457 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:37.087470 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:37.126443 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.126476 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.177976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:37.178015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:37.192821 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:37.192860 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:37.267895 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:37.267926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:37.267945 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:39.846505 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:39.860426 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:39.860514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:39.896684 1143678 cri.go:89] found id: ""
	I0603 13:54:39.896712 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.896726 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:39.896736 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:39.896801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:39.932437 1143678 cri.go:89] found id: ""
	I0603 13:54:39.932482 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.932494 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:39.932503 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:39.932571 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:39.967850 1143678 cri.go:89] found id: ""
	I0603 13:54:39.967883 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.967891 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:39.967898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:39.967952 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:40.003255 1143678 cri.go:89] found id: ""
	I0603 13:54:40.003284 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.003292 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:40.003298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:40.003351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:40.045865 1143678 cri.go:89] found id: ""
	I0603 13:54:40.045892 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.045904 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:40.045912 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:40.045976 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:40.082469 1143678 cri.go:89] found id: ""
	I0603 13:54:40.082498 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.082507 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:40.082513 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:40.082584 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:40.117181 1143678 cri.go:89] found id: ""
	I0603 13:54:40.117231 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.117242 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:40.117250 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:40.117320 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:40.157776 1143678 cri.go:89] found id: ""
	I0603 13:54:40.157813 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.157822 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:40.157832 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:40.157848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:40.213374 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:40.213437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:40.228298 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:40.228330 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:40.305450 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:40.305485 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:40.305503 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:40.393653 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:40.393704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:42.934691 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:42.948505 1143678 kubeadm.go:591] duration metric: took 4m4.45791317s to restartPrimaryControlPlane
	W0603 13:54:42.948592 1143678 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:54:42.948629 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:54:48.316951 1143678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.36829775s)
	I0603 13:54:48.317039 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:48.333630 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:54:48.345772 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:54:48.357359 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:54:48.357386 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:54:48.357477 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:54:48.367844 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:54:48.367917 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:54:48.379349 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:54:48.389684 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:54:48.389760 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:54:48.401562 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.412670 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:54:48.412743 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.424261 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:54:48.434598 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:54:48.434674 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:54:48.446187 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:54:48.527873 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:54:48.528073 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:54:48.695244 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:54:48.695401 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:54:48.695581 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:54:48.930141 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:54:48.932024 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:54:48.932110 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:54:48.932168 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:54:48.932235 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:54:48.932305 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:54:48.932481 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:54:48.932639 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:54:48.933272 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:54:48.933771 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:54:48.934251 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:54:48.934654 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:54:48.934712 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:54:48.934762 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:54:49.063897 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:54:49.266680 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:54:49.364943 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:54:49.628905 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:54:49.645861 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:54:49.645991 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:54:49.646049 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:54:49.795196 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:54:49.798407 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:54:49.798564 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:54:49.800163 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:54:49.802226 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:54:49.803809 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:54:49.806590 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:55:29.807867 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:55:29.808474 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:29.808754 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:34.809455 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:34.809722 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:44.810305 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:44.810491 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:04.811725 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:04.811929 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:44.813650 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:44.813933 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:44.813964 1143678 kubeadm.go:309] 
	I0603 13:56:44.814039 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:56:44.814075 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:56:44.814115 1143678 kubeadm.go:309] 
	I0603 13:56:44.814197 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:56:44.814246 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:56:44.814369 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:56:44.814378 1143678 kubeadm.go:309] 
	I0603 13:56:44.814496 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:56:44.814540 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:56:44.814573 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:56:44.814580 1143678 kubeadm.go:309] 
	I0603 13:56:44.814685 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:56:44.814785 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:56:44.814798 1143678 kubeadm.go:309] 
	I0603 13:56:44.814896 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:56:44.815001 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:56:44.815106 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:56:44.815208 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:56:44.815220 1143678 kubeadm.go:309] 
	I0603 13:56:44.816032 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:44.816137 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:56:44.816231 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 13:56:44.816405 1143678 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0603 13:56:44.816480 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:56:45.288649 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:45.305284 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:56:45.316705 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:56:45.316736 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:56:45.316804 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:56:45.327560 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:56:45.327630 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:56:45.337910 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:56:45.349864 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:56:45.349948 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:56:45.361369 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.371797 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:56:45.371866 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.382861 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:56:45.393310 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:56:45.393382 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:56:45.403822 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:45.476725 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:56:45.476794 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:45.630786 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:45.630956 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:45.631125 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:45.814370 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:45.816372 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:45.816481 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:45.816556 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:45.816710 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:45.816831 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:45.816928 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:45.817003 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:45.817093 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:45.817178 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:45.817328 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:45.817477 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:45.817533 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:45.817607 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:46.025905 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:46.331809 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:46.551488 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:46.636938 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:46.663292 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:46.663400 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:46.663448 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:46.840318 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:46.842399 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:56:46.842530 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:46.851940 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:46.855283 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:46.855443 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:46.857883 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:57:26.860915 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:57:26.861047 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:26.861296 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:31.861724 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:31.862046 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:41.862803 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:41.863057 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:01.862907 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:01.863136 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862069 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:41.862391 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862430 1143678 kubeadm.go:309] 
	I0603 13:58:41.862535 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:58:41.862613 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:58:41.862624 1143678 kubeadm.go:309] 
	I0603 13:58:41.862675 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:58:41.862737 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:58:41.862895 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:58:41.862909 1143678 kubeadm.go:309] 
	I0603 13:58:41.863030 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:58:41.863060 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:58:41.863090 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:58:41.863100 1143678 kubeadm.go:309] 
	I0603 13:58:41.863230 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:58:41.863388 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:58:41.863406 1143678 kubeadm.go:309] 
	I0603 13:58:41.863583 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:58:41.863709 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:58:41.863811 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:58:41.863894 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:58:41.863917 1143678 kubeadm.go:309] 
	I0603 13:58:41.865001 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:58:41.865120 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:58:41.865209 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 13:58:41.865361 1143678 kubeadm.go:393] duration metric: took 8m3.432874561s to StartCluster
	I0603 13:58:41.865460 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:58:41.865537 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:58:41.912780 1143678 cri.go:89] found id: ""
	I0603 13:58:41.912812 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.912826 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:58:41.912832 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:58:41.912901 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:58:41.951372 1143678 cri.go:89] found id: ""
	I0603 13:58:41.951402 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.951411 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:58:41.951418 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:58:41.951490 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:58:41.989070 1143678 cri.go:89] found id: ""
	I0603 13:58:41.989104 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.989115 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:58:41.989123 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:58:41.989191 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:58:42.026208 1143678 cri.go:89] found id: ""
	I0603 13:58:42.026238 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.026246 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:58:42.026252 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:58:42.026312 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:58:42.064899 1143678 cri.go:89] found id: ""
	I0603 13:58:42.064941 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.064950 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:58:42.064971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:58:42.065043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:58:42.098817 1143678 cri.go:89] found id: ""
	I0603 13:58:42.098858 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.098868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:58:42.098876 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:58:42.098939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:58:42.133520 1143678 cri.go:89] found id: ""
	I0603 13:58:42.133558 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.133570 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:58:42.133579 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:58:42.133639 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:58:42.187356 1143678 cri.go:89] found id: ""
	I0603 13:58:42.187387 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.187399 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:58:42.187412 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:58:42.187434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:58:42.249992 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:58:42.250034 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:58:42.272762 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:58:42.272801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:58:42.362004 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:58:42.362030 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:58:42.362046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:58:42.468630 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:58:42.468676 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0603 13:58:42.510945 1143678 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 13:58:42.511002 1143678 out.go:239] * 
	W0603 13:58:42.511094 1143678 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.511119 1143678 out.go:239] * 
	W0603 13:58:42.512307 1143678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:58:42.516199 1143678 out.go:177] 
	W0603 13:58:42.517774 1143678 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.517848 1143678 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 13:58:42.517883 1143678 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 13:58:42.519747 1143678 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-151788 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
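The captured log points at a kubelet that never came up, with minikube's own suggestion being a cgroup-driver mismatch. A minimal retry sketch based on that suggestion, not a verified fix, reusing the profile name and flags from the failing command above and the diagnostics the log itself recommends:

	# Retry with the kubelet cgroup driver pinned to systemd, as the log suggests:
	out/minikube-linux-amd64 start -p old-k8s-version-151788 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If the kubelet still fails, inspect it on the node using the commands from the kubeadm output:
	out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"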
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 2 (241.0925ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-151788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-151788 logs -n 25: (1.820673379s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-021279 sudo cat                              | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo find                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo crio                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-021279                                       | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-069000 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | disable-driver-mounts-069000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:43 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-817450             | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC | 03 Jun 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223260            | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-030870  | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-817450                  | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-151788        | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC | 03 Jun 24 13:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223260                 | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC | 03 Jun 24 13:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-030870       | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:54 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-151788             | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:46:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:46:22.347386 1143678 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:46:22.347655 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347666 1143678 out.go:304] Setting ErrFile to fd 2...
	I0603 13:46:22.347672 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347855 1143678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:46:22.348458 1143678 out.go:298] Setting JSON to false
	I0603 13:46:22.349502 1143678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16129,"bootTime":1717406253,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:46:22.349571 1143678 start.go:139] virtualization: kvm guest
	I0603 13:46:22.351720 1143678 out.go:177] * [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:46:22.353180 1143678 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:46:22.353235 1143678 notify.go:220] Checking for updates...
	I0603 13:46:22.354400 1143678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:46:22.355680 1143678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:46:22.356796 1143678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:46:22.357952 1143678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:46:22.359052 1143678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:46:22.360807 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:46:22.361230 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.361306 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.376241 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0603 13:46:22.376679 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.377267 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.377292 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.377663 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.377897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.379705 1143678 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 13:46:22.380895 1143678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:46:22.381188 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.381222 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.396163 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0603 13:46:22.396669 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.397158 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.397180 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.397509 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.397693 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.433731 1143678 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:46:22.434876 1143678 start.go:297] selected driver: kvm2
	I0603 13:46:22.434897 1143678 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.435028 1143678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:46:22.435716 1143678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.435807 1143678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:46:22.451200 1143678 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:46:22.451663 1143678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:46:22.451755 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:46:22.451773 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:46:22.451832 1143678 start.go:340] cluster config:
	{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.451961 1143678 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.454327 1143678 out.go:177] * Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	I0603 13:46:22.057705 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:22.455453 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:46:22.455492 1143678 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 13:46:22.455501 1143678 cache.go:56] Caching tarball of preloaded images
	I0603 13:46:22.455591 1143678 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:46:22.455604 1143678 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 13:46:22.455685 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:46:22.455860 1143678 start.go:360] acquireMachinesLock for old-k8s-version-151788: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:46:28.137725 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:31.209684 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:37.289692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:40.361614 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:46.441692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:49.513686 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:55.593727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:58.665749 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:04.745752 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:07.817726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:13.897702 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:16.969727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:23.049716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:26.121758 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:32.201765 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:35.273759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:41.353716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:44.425767 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:50.505743 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:53.577777 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:59.657729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:02.729769 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:08.809709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:11.881708 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:17.961759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:21.033726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:27.113698 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:30.185691 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:36.265722 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:39.337764 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:45.417711 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:48.489729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:54.569746 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:57.641701 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:03.721772 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:06.793709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:12.873710 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:15.945728 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:22.025678 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:25.097675 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:28.102218 1143252 start.go:364] duration metric: took 3m44.709006863s to acquireMachinesLock for "embed-certs-223260"
	I0603 13:49:28.102293 1143252 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:28.102302 1143252 fix.go:54] fixHost starting: 
	I0603 13:49:28.102635 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:28.102666 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:28.118384 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0603 13:49:28.119014 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:28.119601 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:49:28.119630 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:28.119930 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:28.120116 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:28.120302 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:49:28.122003 1143252 fix.go:112] recreateIfNeeded on embed-certs-223260: state=Stopped err=<nil>
	I0603 13:49:28.122030 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	W0603 13:49:28.122167 1143252 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:28.123963 1143252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223260" ...
	I0603 13:49:28.125564 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Start
	I0603 13:49:28.125750 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring networks are active...
	I0603 13:49:28.126598 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network default is active
	I0603 13:49:28.126965 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network mk-embed-certs-223260 is active
	I0603 13:49:28.127319 1143252 main.go:141] libmachine: (embed-certs-223260) Getting domain xml...
	I0603 13:49:28.128017 1143252 main.go:141] libmachine: (embed-certs-223260) Creating domain...
	I0603 13:49:28.099474 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:28.099536 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.099883 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:49:28.099915 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.100115 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:49:28.102052 1142862 machine.go:97] duration metric: took 4m37.409499751s to provisionDockerMachine
	I0603 13:49:28.102123 1142862 fix.go:56] duration metric: took 4m37.432963538s for fixHost
	I0603 13:49:28.102135 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 4m37.432994587s
	W0603 13:49:28.102158 1142862 start.go:713] error starting host: provision: host is not running
	W0603 13:49:28.102317 1142862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 13:49:28.102332 1142862 start.go:728] Will try again in 5 seconds ...
	I0603 13:49:29.332986 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting to get IP...
	I0603 13:49:29.333963 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.334430 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.334475 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.334403 1144333 retry.go:31] will retry after 203.681987ms: waiting for machine to come up
	I0603 13:49:29.539995 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.540496 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.540564 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.540457 1144333 retry.go:31] will retry after 368.548292ms: waiting for machine to come up
	I0603 13:49:29.911212 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.911632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.911665 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.911566 1144333 retry.go:31] will retry after 402.690969ms: waiting for machine to come up
	I0603 13:49:30.316480 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.316889 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.316920 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.316852 1144333 retry.go:31] will retry after 500.397867ms: waiting for machine to come up
	I0603 13:49:30.818653 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.819082 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.819107 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.819026 1144333 retry.go:31] will retry after 663.669804ms: waiting for machine to come up
	I0603 13:49:31.483776 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:31.484117 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:31.484144 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:31.484079 1144333 retry.go:31] will retry after 938.433137ms: waiting for machine to come up
	I0603 13:49:32.424128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:32.424609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:32.424640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:32.424548 1144333 retry.go:31] will retry after 919.793328ms: waiting for machine to come up
	I0603 13:49:33.103895 1142862 start.go:360] acquireMachinesLock for no-preload-817450: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:49:33.346091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:33.346549 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:33.346574 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:33.346511 1144333 retry.go:31] will retry after 1.115349726s: waiting for machine to come up
	I0603 13:49:34.463875 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:34.464588 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:34.464616 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:34.464529 1144333 retry.go:31] will retry after 1.153940362s: waiting for machine to come up
	I0603 13:49:35.619844 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:35.620243 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:35.620275 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:35.620176 1144333 retry.go:31] will retry after 1.514504154s: waiting for machine to come up
	I0603 13:49:37.135961 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:37.136409 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:37.136431 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:37.136382 1144333 retry.go:31] will retry after 2.757306897s: waiting for machine to come up
	I0603 13:49:39.895589 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:39.895942 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:39.895970 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:39.895881 1144333 retry.go:31] will retry after 3.019503072s: waiting for machine to come up
	I0603 13:49:42.919177 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:42.919640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:42.919670 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:42.919588 1144333 retry.go:31] will retry after 3.150730989s: waiting for machine to come up
	I0603 13:49:47.494462 1143450 start.go:364] duration metric: took 3m37.207410663s to acquireMachinesLock for "default-k8s-diff-port-030870"
	I0603 13:49:47.494544 1143450 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:47.494557 1143450 fix.go:54] fixHost starting: 
	I0603 13:49:47.494876 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:47.494918 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:47.511570 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44939
	I0603 13:49:47.512072 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:47.512568 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:49:47.512593 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:47.512923 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:47.513117 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:49:47.513276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:49:47.514783 1143450 fix.go:112] recreateIfNeeded on default-k8s-diff-port-030870: state=Stopped err=<nil>
	I0603 13:49:47.514817 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	W0603 13:49:47.514999 1143450 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:47.517441 1143450 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-030870" ...
	I0603 13:49:46.071609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072094 1143252 main.go:141] libmachine: (embed-certs-223260) Found IP for machine: 192.168.83.246
	I0603 13:49:46.072117 1143252 main.go:141] libmachine: (embed-certs-223260) Reserving static IP address...
	I0603 13:49:46.072132 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has current primary IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072552 1143252 main.go:141] libmachine: (embed-certs-223260) Reserved static IP address: 192.168.83.246
	I0603 13:49:46.072585 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.072593 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting for SSH to be available...
	I0603 13:49:46.072632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | skip adding static IP to network mk-embed-certs-223260 - found existing host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"}
	I0603 13:49:46.072655 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Getting to WaitForSSH function...
	I0603 13:49:46.074738 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075059 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.075091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075189 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH client type: external
	I0603 13:49:46.075213 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa (-rw-------)
	I0603 13:49:46.075249 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:49:46.075271 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | About to run SSH command:
	I0603 13:49:46.075283 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | exit 0
	I0603 13:49:46.197971 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | SSH cmd err, output: <nil>: 
	I0603 13:49:46.198498 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetConfigRaw
	I0603 13:49:46.199179 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.201821 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.202277 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202533 1143252 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/config.json ...
	I0603 13:49:46.202727 1143252 machine.go:94] provisionDockerMachine start ...
	I0603 13:49:46.202745 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:46.202964 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.205259 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205636 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.205663 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205773 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.205954 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206100 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206318 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.206538 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.206819 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.206837 1143252 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:49:46.310241 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:49:46.310277 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310583 1143252 buildroot.go:166] provisioning hostname "embed-certs-223260"
	I0603 13:49:46.310616 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310836 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.313692 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314078 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.314116 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314222 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.314446 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314631 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314800 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.314969 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.315166 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.315183 1143252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223260 && echo "embed-certs-223260" | sudo tee /etc/hostname
	I0603 13:49:46.428560 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223260
	
	I0603 13:49:46.428600 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.431381 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.431757 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.431784 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.432021 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.432283 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432477 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432609 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.432785 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.432960 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.432976 1143252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223260/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:49:46.542400 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:46.542446 1143252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:49:46.542536 1143252 buildroot.go:174] setting up certificates
	I0603 13:49:46.542557 1143252 provision.go:84] configureAuth start
	I0603 13:49:46.542576 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.542913 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.545940 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546339 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.546368 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.548715 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549097 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.549127 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549294 1143252 provision.go:143] copyHostCerts
	I0603 13:49:46.549382 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:49:46.549397 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:49:46.549486 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:49:46.549578 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:49:46.549587 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:49:46.549613 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:49:46.549664 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:49:46.549671 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:49:46.549690 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:49:46.549740 1143252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223260 san=[127.0.0.1 192.168.83.246 embed-certs-223260 localhost minikube]
	I0603 13:49:46.807050 1143252 provision.go:177] copyRemoteCerts
	I0603 13:49:46.807111 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:49:46.807140 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.809916 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810303 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.810347 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810513 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.810758 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.810929 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.811168 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:46.892182 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:49:46.916657 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0603 13:49:46.941896 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:49:46.967292 1143252 provision.go:87] duration metric: took 424.714334ms to configureAuth
	I0603 13:49:46.967331 1143252 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:49:46.967539 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:49:46.967626 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.970350 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970668 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.970703 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970870 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.971115 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971314 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971454 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.971625 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.971809 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.971831 1143252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:49:47.264894 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:49:47.264922 1143252 machine.go:97] duration metric: took 1.062182146s to provisionDockerMachine
	I0603 13:49:47.264935 1143252 start.go:293] postStartSetup for "embed-certs-223260" (driver="kvm2")
	I0603 13:49:47.264946 1143252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:49:47.264963 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.265368 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:49:47.265398 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.268412 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268765 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.268796 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.269223 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.269455 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.269625 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.348583 1143252 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:49:47.352828 1143252 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:49:47.352867 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:49:47.352949 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:49:47.353046 1143252 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:49:47.353164 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:49:47.363222 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:47.388132 1143252 start.go:296] duration metric: took 123.177471ms for postStartSetup
	I0603 13:49:47.388202 1143252 fix.go:56] duration metric: took 19.285899119s for fixHost
	I0603 13:49:47.388233 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.390960 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391414 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.391477 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391681 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.391937 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392127 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392266 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.392436 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:47.392670 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:47.392687 1143252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:49:47.494294 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422587.469729448
	
	I0603 13:49:47.494320 1143252 fix.go:216] guest clock: 1717422587.469729448
	I0603 13:49:47.494328 1143252 fix.go:229] Guest: 2024-06-03 13:49:47.469729448 +0000 UTC Remote: 2024-06-03 13:49:47.388208749 +0000 UTC m=+244.138441135 (delta=81.520699ms)
	I0603 13:49:47.494354 1143252 fix.go:200] guest clock delta is within tolerance: 81.520699ms
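
The "guest clock delta is within tolerance" line compares the guest's clock reading with the host's and only resyncs when the difference is too large. A small self-contained sketch of that check, using the values from the log; the 2s tolerance here is an assumed threshold, not necessarily minikube's exact cutoff.

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance returns true when the absolute guest/host clock delta
// is at or below the given tolerance, so no resync is needed.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1717422587469729448)     // guest clock reading from the log
	host := guest.Add(-81520699 * time.Nanosecond) // the 81.520699ms delta seen above
	fmt.Println("within tolerance:", clockWithinTolerance(guest, host, 2*time.Second))
}
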
	I0603 13:49:47.494361 1143252 start.go:83] releasing machines lock for "embed-certs-223260", held for 19.392103897s
	I0603 13:49:47.494394 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.494686 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:47.497515 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.497930 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.497976 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.498110 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498672 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498859 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498934 1143252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:49:47.498988 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.499062 1143252 ssh_runner.go:195] Run: cat /version.json
	I0603 13:49:47.499082 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.501788 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502075 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502131 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502156 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502291 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502390 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502427 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502550 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502647 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502738 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502806 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502942 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502955 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.503078 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.612706 1143252 ssh_runner.go:195] Run: systemctl --version
	I0603 13:49:47.618922 1143252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:49:47.764749 1143252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:49:47.770936 1143252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:49:47.771023 1143252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:49:47.788401 1143252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
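
The find/mv pipeline above sidelines any bridge or podman CNI configs so only the runtime's own config is loaded. A rough Go equivalent of that logic (the function name is illustrative):

package sketch

import (
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI configs in dir to
// "<name>.mk_disabled" and returns the paths it disabled.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}
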
	I0603 13:49:47.788427 1143252 start.go:494] detecting cgroup driver to use...
	I0603 13:49:47.788486 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:49:47.805000 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:49:47.822258 1143252 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:49:47.822315 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:49:47.837826 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:49:47.853818 1143252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:49:47.978204 1143252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:49:48.106302 1143252 docker.go:233] disabling docker service ...
	I0603 13:49:48.106366 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:49:48.120974 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:49:48.134911 1143252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:49:48.278103 1143252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:49:48.398238 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:49:48.413207 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:49:48.432211 1143252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:49:48.432281 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.443668 1143252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:49:48.443746 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.454990 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.467119 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.479875 1143252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:49:48.496767 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.508872 1143252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.530972 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
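
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and adjust conmon_cgroup and default_sysctls. A sketch of the first two edits in Go, assuming direct file access rather than the SSH-wrapped sed in the log:

package sketch

import (
	"os"
	"regexp"
)

// patchCrioConf pins the pause image and cgroup manager in a CRI-O config
// file; the conmon_cgroup and default_sysctls edits are analogous.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, data, 0o644)
}
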
	I0603 13:49:48.542631 1143252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:49:48.552775 1143252 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:49:48.552836 1143252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:49:48.566528 1143252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
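
The status-255 sysctl failure above is expected when br_netfilter is not yet loaded; the log falls back to modprobe and then enables IPv4 forwarding. A compact sketch of that fallback (run as root; the function name is illustrative):

package sketch

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter when the bridge sysctl is missing
// (the "cannot stat" error in the log), then turns on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}
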
	I0603 13:49:48.582917 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:48.716014 1143252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:49:48.860157 1143252 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:49:48.860283 1143252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:49:48.865046 1143252 start.go:562] Will wait 60s for crictl version
	I0603 13:49:48.865121 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:49:48.869520 1143252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:49:48.909721 1143252 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:49:48.909819 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.939080 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.970595 1143252 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
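
The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a simple existence poll after restarting CRI-O. A sketch of that wait, with an assumed 500ms poll interval:

package sketch

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given socket path exists or the timeout
// expires, mirroring the 60s wait for /var/run/crio/crio.sock.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}
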
	I0603 13:49:47.518807 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Start
	I0603 13:49:47.518981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring networks are active...
	I0603 13:49:47.519623 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network default is active
	I0603 13:49:47.519926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network mk-default-k8s-diff-port-030870 is active
	I0603 13:49:47.520408 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Getting domain xml...
	I0603 13:49:47.521014 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Creating domain...
	I0603 13:49:48.798483 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting to get IP...
	I0603 13:49:48.799695 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800174 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800305 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:48.800165 1144471 retry.go:31] will retry after 204.161843ms: waiting for machine to come up
	I0603 13:49:49.005669 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006143 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006180 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.006091 1144471 retry.go:31] will retry after 382.751679ms: waiting for machine to come up
	I0603 13:49:49.391162 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391717 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391750 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.391670 1144471 retry.go:31] will retry after 314.248576ms: waiting for machine to come up
	I0603 13:49:49.707349 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707957 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707990 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.707856 1144471 retry.go:31] will retry after 446.461931ms: waiting for machine to come up
	I0603 13:49:50.155616 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156238 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156274 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.156174 1144471 retry.go:31] will retry after 712.186964ms: waiting for machine to come up
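
The "waiting for machine to come up" retries above poll the libvirt DHCP lease with growing, jittered delays until the domain reports an address. A sketch of that loop; lookupIP and the example address stand in for the real lease query, and the backoff constants are assumptions based on the delays in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with increasing, jittered delays until it reports
// an address or the overall deadline passes.
func waitForIP(lookupIP func() (string, bool), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		delay := time.Duration(attempt)*200*time.Millisecond +
			time.Duration(rand.Int63n(int64(200*time.Millisecond)))
		time.Sleep(delay)
	}
	return "", errors.New("machine did not report an IP before the deadline")
}

func main() {
	tries := 0
	ip, err := waitForIP(func() (string, bool) {
		tries++
		return "192.168.39.10", tries > 3 // pretend the lease appears on the 4th poll
	}, 30*time.Second)
	fmt.Println(ip, err)
}
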
	I0603 13:49:48.971971 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:48.975079 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975439 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:48.975471 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975721 1143252 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0603 13:49:48.980114 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
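
The grep/echo pipeline above rewrites /etc/hosts idempotently: any existing host.minikube.internal line is dropped and the current mapping appended. A Go sketch of the same rewrite (illustrative helper, direct file access instead of the SSH-wrapped shell):

package sketch

import (
	"os"
	"strings"
)

// setHostsEntry removes any line already ending in "<tab>name" and appends
// "ip<tab>name", matching the shell one-liner in the log.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}
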
	I0603 13:49:48.993380 1143252 kubeadm.go:877] updating cluster {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:49:48.993543 1143252 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:49:48.993636 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:49.032289 1143252 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:49:49.032364 1143252 ssh_runner.go:195] Run: which lz4
	I0603 13:49:49.036707 1143252 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 13:49:49.040973 1143252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:49:49.041000 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:49:50.554295 1143252 crio.go:462] duration metric: took 1.517623353s to copy over tarball
	I0603 13:49:50.554387 1143252 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:49:52.823733 1143252 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269303423s)
	I0603 13:49:52.823785 1143252 crio.go:469] duration metric: took 2.269454274s to extract the tarball
	I0603 13:49:52.823799 1143252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:49:52.862060 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:52.906571 1143252 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:49:52.906602 1143252 cache_images.go:84] Images are preloaded, skipping loading
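
The preload flow above copies the image tarball to the guest, extracts it into /var with lz4, deletes it, then re-lists images to confirm everything is preloaded. A sketch of the extract-and-cleanup part, shown as local exec for brevity (minikube runs the same commands over SSH):

package sketch

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, preserving
// xattrs as in the log, then removes the tarball.
func extractPreload(tarball string) error {
	out, err := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extracting %s: %v\n%s", tarball, err, out)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}
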
	I0603 13:49:52.906618 1143252 kubeadm.go:928] updating node { 192.168.83.246 8443 v1.30.1 crio true true} ...
	I0603 13:49:52.906774 1143252 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
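
The kubelet systemd drop-in printed above is generated from the node settings (Kubernetes version, hostname override, node IP). A sketch of how such a drop-in could be templated; the flag set is copied from the log, not from minikube's real template code.

package sketch

import "fmt"

// kubeletUnit renders the [Unit]/[Service]/[Install] drop-in shown in the
// log for a given binaries version, hostname and node IP.
func kubeletUnit(version, hostname, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, hostname, nodeIP)
}
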
	I0603 13:49:52.906866 1143252 ssh_runner.go:195] Run: crio config
	I0603 13:49:52.954082 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:49:52.954111 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:49:52.954129 1143252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:49:52.954159 1143252 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.246 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223260 NodeName:embed-certs-223260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:49:52.954355 1143252 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223260"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:49:52.954446 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:49:52.964488 1143252 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:49:52.964582 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:49:52.974118 1143252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 13:49:52.990701 1143252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:49:53.007539 1143252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 13:49:53.024943 1143252 ssh_runner.go:195] Run: grep 192.168.83.246	control-plane.minikube.internal$ /etc/hosts
	I0603 13:49:53.029097 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:53.041234 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:53.178449 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:49:53.195718 1143252 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260 for IP: 192.168.83.246
	I0603 13:49:53.195750 1143252 certs.go:194] generating shared ca certs ...
	I0603 13:49:53.195769 1143252 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:49:53.195954 1143252 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:49:53.196021 1143252 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:49:53.196035 1143252 certs.go:256] generating profile certs ...
	I0603 13:49:53.196256 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/client.key
	I0603 13:49:53.196341 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key.90d43877
	I0603 13:49:53.196437 1143252 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key
	I0603 13:49:53.196605 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:49:53.196663 1143252 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:49:53.196678 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:49:53.196708 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:49:53.196756 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:49:53.196787 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:49:53.196838 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:53.197895 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:49:53.231612 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:49:53.263516 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:49:50.870317 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870816 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870841 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.870781 1144471 retry.go:31] will retry after 855.15183ms: waiting for machine to come up
	I0603 13:49:51.727393 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727960 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:51.727869 1144471 retry.go:31] will retry after 997.293541ms: waiting for machine to come up
	I0603 13:49:52.726578 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727036 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727073 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:52.726953 1144471 retry.go:31] will retry after 1.4233414s: waiting for machine to come up
	I0603 13:49:54.151594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152072 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152099 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:54.152021 1144471 retry.go:31] will retry after 1.348888248s: waiting for machine to come up
	I0603 13:49:53.303724 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:49:53.334700 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 13:49:53.371594 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:49:53.396381 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:49:53.420985 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:49:53.445334 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:49:53.469632 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:49:53.495720 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:49:53.522416 1143252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:49:53.541593 1143252 ssh_runner.go:195] Run: openssl version
	I0603 13:49:53.547653 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:49:53.558802 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563511 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563579 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.569691 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:49:53.582814 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:49:53.595684 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600613 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.607008 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:49:53.619919 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:49:53.632663 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637604 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.643844 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:49:53.655934 1143252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:49:53.660801 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:49:53.667391 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:49:53.674382 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:49:53.681121 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:49:53.687496 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:49:53.693623 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
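
The series of "openssl x509 -checkend 86400" runs above verifies that none of the control-plane certificates expire within the next 24 hours. The same check in Go with crypto/x509 (illustrative helper name):

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// equivalent to "openssl x509 -checkend" with d expressed in seconds.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}
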
	I0603 13:49:53.699764 1143252 kubeadm.go:391] StartCluster: {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:49:53.699871 1143252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:49:53.699928 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.736588 1143252 cri.go:89] found id: ""
	I0603 13:49:53.736662 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:49:53.750620 1143252 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:49:53.750644 1143252 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:49:53.750652 1143252 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:49:53.750716 1143252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:49:53.765026 1143252 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:49:53.766297 1143252 kubeconfig.go:125] found "embed-certs-223260" server: "https://192.168.83.246:8443"
	I0603 13:49:53.768662 1143252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:49:53.779583 1143252 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.246
	I0603 13:49:53.779625 1143252 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:49:53.779639 1143252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:49:53.779695 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.820312 1143252 cri.go:89] found id: ""
	I0603 13:49:53.820398 1143252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:49:53.838446 1143252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:49:53.849623 1143252 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:49:53.849643 1143252 kubeadm.go:156] found existing configuration files:
	
	I0603 13:49:53.849700 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:49:53.859379 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:49:53.859451 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:49:53.869939 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:49:53.880455 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:49:53.880527 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:49:53.890918 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.900841 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:49:53.900894 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.910968 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:49:53.921064 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:49:53.921121 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
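
The grep/rm sequence above keeps each kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443; anything else is removed so the following kubeadm phases regenerate it. A sketch of that cleanup (illustrative helper, local file access instead of SSH):

package sketch

import (
	"bytes"
	"os"
)

// cleanStaleKubeconfigs removes every conf that does not already reference
// the expected control-plane endpoint; missing files are ignored.
func cleanStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue
		}
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			return err
		}
	}
	return nil
}
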
	I0603 13:49:53.931550 1143252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:49:53.942309 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.078959 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.842079 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.043420 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.111164 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
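
The five commands above are the restart path's "kubeadm init phase" sequence run against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence from Go; the function is illustrative, but the phase names match the log.

package sketch

import (
	"fmt"
	"os/exec"
)

// runInitPhases executes the kubeadm init phases used for a cluster restart,
// in the same order as the log, against the given config file.
func runInitPhases(kubeadm, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		out, err := exec.Command(kubeadm, append(args, "--config", config)...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubeadm %v: %v\n%s", args, err, out)
		}
	}
	return nil
}
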
	I0603 13:49:55.220384 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:49:55.220475 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:55.721612 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.221513 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.257801 1143252 api_server.go:72] duration metric: took 1.037411844s to wait for apiserver process to appear ...
	I0603 13:49:56.257845 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:49:56.257874 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:55.502734 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503282 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503313 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:55.503226 1144471 retry.go:31] will retry after 1.733012887s: waiting for machine to come up
	I0603 13:49:57.238544 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.238975 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.239006 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:57.238917 1144471 retry.go:31] will retry after 2.565512625s: waiting for machine to come up
	I0603 13:49:59.806662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807077 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807105 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:59.807024 1144471 retry.go:31] will retry after 2.759375951s: waiting for machine to come up
	I0603 13:49:59.684015 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.684058 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.684078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.757751 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.757791 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.758846 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.779923 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:49:59.779974 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.258098 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.265061 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.265089 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.758643 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.764364 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.764400 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.257950 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.262846 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.262875 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.758078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.763269 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.763301 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.258641 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.263628 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.263658 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.758205 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.765436 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.765470 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:03.258663 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:03.263141 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:50:03.269787 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:03.269817 1143252 api_server.go:131] duration metric: took 7.011964721s to wait for apiserver health ...
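For context on the block above (repeated GETs to https://192.168.83.246:8443/healthz roughly every 500ms until a 200 is returned), the following is a minimal illustrative Go sketch of that kind of poll. It is not minikube's api_server.go implementation; the URL, timeout and TLS handling are assumptions for the example.

// healthzpoll: illustrative sketch of polling a kube-apiserver /healthz
// endpoint until it returns 200 OK or a timeout expires. NOT the code that
// produced the log above; URL and timeout are placeholders.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// A local probe typically cannot verify the apiserver cert yet, so skip
	// verification here (assumption for the sketch).
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Address taken from the log above; used here only as an example value.
	if err := waitForHealthz("https://192.168.83.246:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}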
	I0603 13:50:03.269827 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:50:03.269833 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:03.271812 1143252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:03.273154 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:03.285329 1143252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:50:03.305480 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:03.317546 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:03.317601 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:03.317614 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:03.317627 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:03.317637 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:03.317645 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:50:03.317658 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:03.317667 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:03.317677 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:50:03.317686 1143252 system_pods.go:74] duration metric: took 12.177585ms to wait for pod list to return data ...
	I0603 13:50:03.317695 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:03.321445 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:03.321479 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:03.321493 1143252 node_conditions.go:105] duration metric: took 3.787651ms to run NodePressure ...
	I0603 13:50:03.321512 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:03.598576 1143252 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604196 1143252 kubeadm.go:733] kubelet initialised
	I0603 13:50:03.604219 1143252 kubeadm.go:734] duration metric: took 5.606021ms waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604236 1143252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:03.611441 1143252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.615911 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615936 1143252 pod_ready.go:81] duration metric: took 4.468017ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.615945 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615955 1143252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.620663 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620683 1143252 pod_ready.go:81] duration metric: took 4.71967ms for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.620691 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620697 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.624894 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624917 1143252 pod_ready.go:81] duration metric: took 4.212227ms for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.624925 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624933 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.708636 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708665 1143252 pod_ready.go:81] duration metric: took 83.72445ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.708675 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708681 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.109391 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109454 1143252 pod_ready.go:81] duration metric: took 400.761651ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.109469 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109478 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.509683 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509712 1143252 pod_ready.go:81] duration metric: took 400.226435ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.509723 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509730 1143252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.909629 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909659 1143252 pod_ready.go:81] duration metric: took 399.917901ms for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.909669 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909679 1143252 pod_ready.go:38] duration metric: took 1.30543039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:04.909697 1143252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:04.921682 1143252 ops.go:34] apiserver oom_adj: -16
	I0603 13:50:04.921708 1143252 kubeadm.go:591] duration metric: took 11.171050234s to restartPrimaryControlPlane
	I0603 13:50:04.921717 1143252 kubeadm.go:393] duration metric: took 11.221962831s to StartCluster
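The pod_ready lines above wait for each system-critical pod to report the "Ready" condition (and skip pods whose node is not Ready). A minimal client-go sketch of that kind of check is shown below; it is not minikube's pod_ready.go, and the kubeconfig path and pod name are placeholders.

// podready: illustrative client-go sketch of waiting for a pod's Ready
// condition, in the spirit of the pod_ready waits logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; pod name copied from the log for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "coredns-7db6d8ff4d-qdjrv", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}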
	I0603 13:50:04.921737 1143252 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.921807 1143252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:04.923342 1143252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.923628 1143252 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:04.927063 1143252 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:04.923693 1143252 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:04.923865 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:04.928850 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:04.928873 1143252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223260"
	I0603 13:50:04.928872 1143252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223260"
	I0603 13:50:04.928889 1143252 addons.go:69] Setting metrics-server=true in profile "embed-certs-223260"
	I0603 13:50:04.928906 1143252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223260"
	I0603 13:50:04.928923 1143252 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223260"
	I0603 13:50:04.928935 1143252 addons.go:234] Setting addon metrics-server=true in "embed-certs-223260"
	W0603 13:50:04.928938 1143252 addons.go:243] addon storage-provisioner should already be in state true
	W0603 13:50:04.928945 1143252 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.929307 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929346 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929352 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929372 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929597 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929630 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.944948 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0603 13:50:04.945071 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0603 13:50:04.945489 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.945571 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.946137 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946166 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946299 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946319 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946589 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946650 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946798 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.947022 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0603 13:50:04.947210 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.947250 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.947517 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.948043 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.948069 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.948437 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.949064 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.949107 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.950532 1143252 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223260"
	W0603 13:50:04.950558 1143252 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:04.950589 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.950951 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.951008 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.964051 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37589
	I0603 13:50:04.964078 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0603 13:50:04.964513 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.964562 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.965062 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965088 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965128 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965153 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965473 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965532 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965652 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.965740 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.967606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.967739 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.969783 1143252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:04.971193 1143252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:02.567560 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.567988 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.568020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:50:02.567915 1144471 retry.go:31] will retry after 3.955051362s: waiting for machine to come up
	I0603 13:50:04.972568 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:04.972588 1143252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:04.972606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971275 1143252 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:04.972634 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:04.972658 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971495 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0603 13:50:04.973108 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.973575 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.973599 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.973931 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.974623 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.974658 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.976128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976251 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976535 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976559 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976709 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976724 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976768 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976915 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977099 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977156 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977242 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977305 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.977500 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.990810 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0603 13:50:04.991293 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.991844 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.991875 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.992279 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.992499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.994225 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.994456 1143252 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:04.994476 1143252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:04.994490 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.997771 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998210 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.998239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998418 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.998627 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.998811 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.998941 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:05.119962 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:05.140880 1143252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:05.271863 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:05.275815 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:05.275843 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:05.294572 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:05.346520 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:05.346553 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:05.417100 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:05.417141 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:05.496250 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:06.207746 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207781 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.207849 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207873 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208103 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208152 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208161 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208182 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208157 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208197 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208200 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208216 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208208 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208284 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208572 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208590 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208691 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208703 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208724 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.216764 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.216783 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.217095 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.217111 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374254 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374281 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374603 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374623 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374634 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374638 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.374644 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374901 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374916 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374933 1143252 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223260"
	I0603 13:50:06.374948 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.377491 1143252 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:50:08.083130 1143678 start.go:364] duration metric: took 3m45.627229097s to acquireMachinesLock for "old-k8s-version-151788"
	I0603 13:50:08.083256 1143678 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:08.083266 1143678 fix.go:54] fixHost starting: 
	I0603 13:50:08.083762 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:08.083812 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:08.103187 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0603 13:50:08.103693 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:08.104269 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:50:08.104299 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:08.104746 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:08.105115 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:08.105347 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetState
	I0603 13:50:08.107125 1143678 fix.go:112] recreateIfNeeded on old-k8s-version-151788: state=Stopped err=<nil>
	I0603 13:50:08.107173 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	W0603 13:50:08.107340 1143678 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:08.109207 1143678 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-151788" ...
	I0603 13:50:06.378684 1143252 addons.go:510] duration metric: took 1.4549999s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 13:50:07.145643 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:06.526793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527302 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Found IP for machine: 192.168.39.177
	I0603 13:50:06.527341 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has current primary IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527366 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserving static IP address...
	I0603 13:50:06.527822 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserved static IP address: 192.168.39.177
	I0603 13:50:06.527857 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for SSH to be available...
	I0603 13:50:06.527902 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.527956 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | skip adding static IP to network mk-default-k8s-diff-port-030870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"}
	I0603 13:50:06.527973 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Getting to WaitForSSH function...
	I0603 13:50:06.530287 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.530696 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530802 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH client type: external
	I0603 13:50:06.530827 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa (-rw-------)
	I0603 13:50:06.530849 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:06.530866 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | About to run SSH command:
	I0603 13:50:06.530877 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | exit 0
	I0603 13:50:06.653910 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | SSH cmd err, output: <nil>: 
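The WaitForSSH step above probes the restarted VM by running "exit 0" through the external ssh client until it succeeds. A small illustrative Go sketch of that pattern follows; the host, user, key path, retry budget and ssh options are placeholders, not libmachine's exact flags.

// waitforssh: illustrative sketch of the "wait for SSH to be available" probe
// shown in the log above, retrying `exit 0` via the system ssh client.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(user, host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is reachable; provisioning can proceed
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("ssh to %s@%s did not become available", user, host)
}

func main() {
	// Placeholder values; the IP mirrors the machine in the log above.
	if err := waitForSSH("docker", "192.168.39.177", "/path/to/id_rsa", 30); err != nil {
		fmt.Println(err)
	}
}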
	I0603 13:50:06.654259 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetConfigRaw
	I0603 13:50:06.654981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:06.658094 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658561 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.658600 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658921 1143450 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/config.json ...
	I0603 13:50:06.659144 1143450 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:06.659168 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:06.659486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.662534 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.662915 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.662959 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.663059 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.663258 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663476 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663660 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.663866 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.664103 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.664115 1143450 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:06.766054 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:06.766083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766406 1143450 buildroot.go:166] provisioning hostname "default-k8s-diff-port-030870"
	I0603 13:50:06.766440 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.769445 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.769820 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.769871 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.770029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.770244 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770423 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770670 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.770893 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.771057 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.771070 1143450 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-030870 && echo "default-k8s-diff-port-030870" | sudo tee /etc/hostname
	I0603 13:50:06.889997 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-030870
	
	I0603 13:50:06.890029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.893778 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894260 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.894297 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894614 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.894826 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895211 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.895423 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.895608 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.895625 1143450 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-030870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-030870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-030870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:07.007930 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:07.007971 1143450 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:07.008009 1143450 buildroot.go:174] setting up certificates
	I0603 13:50:07.008020 1143450 provision.go:84] configureAuth start
	I0603 13:50:07.008034 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:07.008433 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:07.011208 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011607 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.011640 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011774 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.013986 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014431 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.014462 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014655 1143450 provision.go:143] copyHostCerts
	I0603 13:50:07.014726 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:07.014737 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:07.014787 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:07.014874 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:07.014882 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:07.014902 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:07.014952 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:07.014959 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:07.014974 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:07.015020 1143450 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-030870 san=[127.0.0.1 192.168.39.177 default-k8s-diff-port-030870 localhost minikube]
	I0603 13:50:07.402535 1143450 provision.go:177] copyRemoteCerts
	I0603 13:50:07.402595 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:07.402626 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.405891 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406240 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.406272 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406484 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.406718 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.406943 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.407132 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.489480 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:07.517212 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 13:50:07.543510 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:07.570284 1143450 provision.go:87] duration metric: took 562.244781ms to configureAuth
	I0603 13:50:07.570318 1143450 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:07.570537 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:07.570629 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.574171 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574706 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.574739 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574948 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.575262 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575549 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575781 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.575965 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.576217 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.576247 1143450 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:07.839415 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:07.839455 1143450 machine.go:97] duration metric: took 1.180296439s to provisionDockerMachine
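Note on the sysconfig step above: the "%!s(MISSING)" fragment is a formatting artifact of minikube's logger (a printf verb whose argument was dropped when the command was echoed into the log), not part of what ran on the guest. Judging from the output that follows, the command as executed amounts to the sketch below; the option string is taken from the log, everything else is an assumption:

	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio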
	I0603 13:50:07.839468 1143450 start.go:293] postStartSetup for "default-k8s-diff-port-030870" (driver="kvm2")
	I0603 13:50:07.839482 1143450 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:07.839506 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:07.839843 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:07.839872 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.842547 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.842884 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.842918 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.843234 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.843471 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.843708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.843952 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.927654 1143450 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:07.932965 1143450 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:07.932997 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:07.933082 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:07.933202 1143450 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:07.933343 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:07.945059 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:07.975774 1143450 start.go:296] duration metric: took 136.280559ms for postStartSetup
	I0603 13:50:07.975822 1143450 fix.go:56] duration metric: took 20.481265153s for fixHost
	I0603 13:50:07.975848 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.979035 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979436 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.979486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979737 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.980012 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980228 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980452 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.980691 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.980935 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.980954 1143450 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:08.082946 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422608.057620379
	
	I0603 13:50:08.082978 1143450 fix.go:216] guest clock: 1717422608.057620379
	I0603 13:50:08.082988 1143450 fix.go:229] Guest: 2024-06-03 13:50:08.057620379 +0000 UTC Remote: 2024-06-03 13:50:07.975826846 +0000 UTC m=+237.845886752 (delta=81.793533ms)
	I0603 13:50:08.083018 1143450 fix.go:200] guest clock delta is within tolerance: 81.793533ms
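For reference, the delta above is simply guest minus host: 1717422608.057620379 − 1717422607.975826846 = 0.081793533 s ≈ 81.79 ms, which is why this line reports it as within tolerance.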
	I0603 13:50:08.083025 1143450 start.go:83] releasing machines lock for "default-k8s-diff-port-030870", held for 20.588515063s
	I0603 13:50:08.083060 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.083369 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:08.086674 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087202 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.087285 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087508 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088324 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088575 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088673 1143450 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:08.088758 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.088823 1143450 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:08.088852 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.092020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092175 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092406 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092485 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092863 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092893 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092916 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.092924 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.093273 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093522 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093541 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093708 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.093710 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.176292 1143450 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:08.204977 1143450 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:08.367121 1143450 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:08.376347 1143450 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:08.376431 1143450 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:08.398639 1143450 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:08.398672 1143450 start.go:494] detecting cgroup driver to use...
	I0603 13:50:08.398750 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:08.422776 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:08.443035 1143450 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:08.443108 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:08.459853 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:08.482009 1143450 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:08.631237 1143450 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:08.806623 1143450 docker.go:233] disabling docker service ...
	I0603 13:50:08.806715 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:08.827122 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:08.842457 1143450 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:08.999795 1143450 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:09.148706 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:09.167314 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:09.188867 1143450 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:09.188959 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.202239 1143450 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:09.202319 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.216228 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.231140 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.246767 1143450 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:09.260418 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.274349 1143450 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.300588 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
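Taken together, the sed edits above would leave the CRI-O drop-in looking roughly like the excerpt below. The section headers are an assumption based on the stock 02-crio.conf layout shipped in the minikube ISO, and only the keys touched here are shown:

	# /etc/crio/crio.conf.d/02-crio.conf (illustrative excerpt)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]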
	I0603 13:50:09.314659 1143450 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:09.326844 1143450 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:09.326919 1143450 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:09.344375 1143450 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
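The failed sysctl above only means the br_netfilter module was not loaded yet; /proc/sys/net/bridge/* does not exist until it is. The recovery the log performs is equivalent to:

	sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
	sudo sysctl net.bridge.bridge-nf-call-iptables        # would now resolve instead of erroring
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # enable IPv4 forwarding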
	I0603 13:50:09.357955 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:09.504105 1143450 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:09.685468 1143450 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:09.685562 1143450 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:09.690863 1143450 start.go:562] Will wait 60s for crictl version
	I0603 13:50:09.690943 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:50:09.696532 1143450 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:09.742785 1143450 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:09.742891 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.782137 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.816251 1143450 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:09.817854 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:09.821049 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821555 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:09.821595 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821855 1143450 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:09.826658 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:09.841351 1143450 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:09.841521 1143450 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:09.841586 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:09.883751 1143450 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:09.883825 1143450 ssh_runner.go:195] Run: which lz4
	I0603 13:50:09.888383 1143450 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 13:50:09.893662 1143450 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:09.893704 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:50:08.110706 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .Start
	I0603 13:50:08.110954 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring networks are active...
	I0603 13:50:08.111890 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network default is active
	I0603 13:50:08.112291 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network mk-old-k8s-version-151788 is active
	I0603 13:50:08.112708 1143678 main.go:141] libmachine: (old-k8s-version-151788) Getting domain xml...
	I0603 13:50:08.113547 1143678 main.go:141] libmachine: (old-k8s-version-151788) Creating domain...
	I0603 13:50:09.528855 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting to get IP...
	I0603 13:50:09.529978 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.530410 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.530453 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.530382 1144654 retry.go:31] will retry after 208.935457ms: waiting for machine to come up
	I0603 13:50:09.741245 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.741816 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.741864 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.741769 1144654 retry.go:31] will retry after 376.532154ms: waiting for machine to come up
	I0603 13:50:10.120533 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.121261 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.121337 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.121239 1144654 retry.go:31] will retry after 339.126643ms: waiting for machine to come up
	I0603 13:50:10.461708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.462488 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.462514 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.462425 1144654 retry.go:31] will retry after 490.057426ms: waiting for machine to come up
	I0603 13:50:10.954107 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.954887 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.954921 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.954840 1144654 retry.go:31] will retry after 711.209001ms: waiting for machine to come up
	I0603 13:50:11.667459 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:11.668198 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:11.668231 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:11.668135 1144654 retry.go:31] will retry after 928.879285ms: waiting for machine to come up
	I0603 13:50:09.645006 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:10.146403 1143252 node_ready.go:49] node "embed-certs-223260" has status "Ready":"True"
	I0603 13:50:10.146438 1143252 node_ready.go:38] duration metric: took 5.005510729s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:10.146453 1143252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:10.154249 1143252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164361 1143252 pod_ready.go:92] pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:10.164401 1143252 pod_ready.go:81] duration metric: took 10.115855ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164419 1143252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675214 1143252 pod_ready.go:92] pod "etcd-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:11.675243 1143252 pod_ready.go:81] duration metric: took 1.510815036s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675254 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.522734 1143450 crio.go:462] duration metric: took 1.634406537s to copy over tarball
	I0603 13:50:11.522837 1143450 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:13.983446 1143450 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460564522s)
	I0603 13:50:13.983484 1143450 crio.go:469] duration metric: took 2.460706596s to extract the tarball
	I0603 13:50:13.983503 1143450 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:50:14.029942 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:14.083084 1143450 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:50:14.083113 1143450 cache_images.go:84] Images are preloaded, skipping loading
	I0603 13:50:14.083122 1143450 kubeadm.go:928] updating node { 192.168.39.177 8444 v1.30.1 crio true true} ...
	I0603 13:50:14.083247 1143450 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-030870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
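The unit snippet above is the kubelet override minikube renders for this node, presumably the same content written a few lines below as the 328-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, after which systemd is reloaded and kubelet started. To confirm what the node actually ends up running, one could inspect the merged unit (illustrative check, not part of the test flow):

	# show kubelet.service plus all drop-ins as systemd sees them
	systemctl cat kubelet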
	I0603 13:50:14.083319 1143450 ssh_runner.go:195] Run: crio config
	I0603 13:50:14.142320 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:14.142344 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:14.142354 1143450 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:14.142379 1143450 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-030870 NodeName:default-k8s-diff-port-030870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:50:14.142517 1143450 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-030870"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:14.142577 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:50:14.153585 1143450 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:14.153687 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:14.164499 1143450 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0603 13:50:14.186564 1143450 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:14.205489 1143450 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
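The 2172-byte payload written here is presumably the kubeadm config dumped above (kubeadm.go:187). Outside of what minikube itself runs, a config like this can be sanity-checked on the node without mutating cluster state, for example with kubeadm's dry-run mode (illustrative, not part of the test flow):

	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run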
	I0603 13:50:14.227005 1143450 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:14.231782 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:14.247433 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:14.368336 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:14.391791 1143450 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870 for IP: 192.168.39.177
	I0603 13:50:14.391816 1143450 certs.go:194] generating shared ca certs ...
	I0603 13:50:14.391840 1143450 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:14.392015 1143450 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:14.392075 1143450 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:14.392090 1143450 certs.go:256] generating profile certs ...
	I0603 13:50:14.392282 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/client.key
	I0603 13:50:14.392373 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key.7a30187e
	I0603 13:50:14.392428 1143450 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key
	I0603 13:50:14.392545 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:14.392602 1143450 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:14.392616 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:14.392650 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:14.392687 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:14.392722 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:14.392780 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:14.393706 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:14.424354 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:14.476267 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:14.514457 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:14.548166 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 13:50:14.584479 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:14.626894 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:14.663103 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:50:14.696750 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:14.725770 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:14.755779 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:14.786060 1143450 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:14.805976 1143450 ssh_runner.go:195] Run: openssl version
	I0603 13:50:14.812737 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:14.824707 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831139 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831255 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.838855 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:14.850974 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:14.865613 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871431 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871518 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.878919 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:14.891371 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:14.903721 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909069 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909180 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.915904 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:50:14.928622 1143450 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:14.934466 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:14.941321 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:14.947960 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:14.955629 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:14.962761 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:14.970396 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
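The six openssl x509 -checkend 86400 runs above confirm that each control-plane certificate is still valid for at least another 24 hours (86400 seconds) before the cluster restart is attempted. A minimal sketch of the same check, with the certificate paths taken from the log (illustrative only, not minikube's own code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Certificate paths taken from the log lines above.
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// openssl exits non-zero when the certificate expires within the next 86400s (24h).
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			fmt.Printf("%s expires within 24h or could not be read: %v\n", c, err)
			continue
		}
		fmt.Printf("%s valid for at least another 24h\n", c)
	}
}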
	I0603 13:50:14.977381 1143450 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:14.977543 1143450 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:14.977599 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.042628 1143450 cri.go:89] found id: ""
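The empty found id: "" result comes from asking the CRI runtime for containers labelled with the kube-system pod namespace; nothing is running yet at this point. A rough local equivalent of that query (assumes crictl is installed and the process can sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all CRI container IDs whose pod namespace label is kube-system,
	// mirroring the crictl invocation in the log above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}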
	I0603 13:50:15.042733 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:15.055439 1143450 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:15.055469 1143450 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:15.055476 1143450 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:15.055535 1143450 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:15.067250 1143450 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:15.068159 1143450 kubeconfig.go:125] found "default-k8s-diff-port-030870" server: "https://192.168.39.177:8444"
	I0603 13:50:15.070060 1143450 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:15.082723 1143450 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.177
	I0603 13:50:15.082788 1143450 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:15.082809 1143450 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:15.082972 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.124369 1143450 cri.go:89] found id: ""
	I0603 13:50:15.124509 1143450 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:15.144064 1143450 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:15.156148 1143450 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:15.156174 1143450 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:15.156240 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 13:50:15.166927 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:15.167006 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:12.598536 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:12.598972 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:12.599008 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:12.598948 1144654 retry.go:31] will retry after 882.970422ms: waiting for machine to come up
	I0603 13:50:13.483171 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:13.483723 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:13.483758 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:13.483640 1144654 retry.go:31] will retry after 1.215665556s: waiting for machine to come up
	I0603 13:50:14.701392 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:14.701960 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:14.701991 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:14.701899 1144654 retry.go:31] will retry after 1.614371992s: waiting for machine to come up
	I0603 13:50:16.318708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:16.319127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:16.319148 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:16.319103 1144654 retry.go:31] will retry after 2.146267337s: waiting for machine to come up
	I0603 13:50:13.683419 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:15.684744 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:16.792510 1143252 pod_ready.go:92] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.792538 1143252 pod_ready.go:81] duration metric: took 5.117277447s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.792549 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798083 1143252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.798112 1143252 pod_ready.go:81] duration metric: took 5.554915ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798126 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804217 1143252 pod_ready.go:92] pod "kube-proxy-s5vdl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.804247 1143252 pod_ready.go:81] duration metric: took 6.113411ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804262 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810317 1143252 pod_ready.go:92] pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.810343 1143252 pod_ready.go:81] duration metric: took 6.073098ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810357 1143252 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:15.178645 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 13:50:15.486524 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:15.486608 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:15.497694 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.509586 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:15.509665 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.521976 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 13:50:15.533446 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:15.533535 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:50:15.545525 1143450 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
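Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint https://control-plane.minikube.internal:8444 and removed when the endpoint is absent, after which the refreshed kubeadm.yaml is copied into place and the kubeadm init phases below regenerate everything. A condensed sketch of that cleanup step (paths and endpoint from the log; not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range configs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = os.Remove(path)
		}
	}
}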
	I0603 13:50:15.557558 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:15.710109 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.725380 1143450 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015227554s)
	I0603 13:50:16.725452 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.964275 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.061586 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.183665 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:17.183764 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:17.684365 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.184269 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.254733 1143450 api_server.go:72] duration metric: took 1.07106398s to wait for apiserver process to appear ...
	I0603 13:50:18.254769 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:50:18.254797 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:18.466825 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:18.467260 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:18.467292 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:18.467187 1144654 retry.go:31] will retry after 2.752334209s: waiting for machine to come up
	I0603 13:50:21.220813 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:21.221235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:21.221267 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:21.221182 1144654 retry.go:31] will retry after 3.082080728s: waiting for machine to come up
	I0603 13:50:18.819188 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.323790 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.193140 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.193177 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.193193 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.265534 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.265580 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.265602 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.277669 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.277703 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.754973 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.761802 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:21.761841 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.255071 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.262166 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.262227 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.755128 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.759896 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.759936 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.255520 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.262093 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.262128 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.755784 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.760053 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.760079 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.255534 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.259793 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:24.259820 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.755365 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.759964 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:50:24.768830 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:24.768862 1143450 api_server.go:131] duration metric: took 6.51408552s to wait for apiserver health ...
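Getting from the first 403/500 responses to the final 200 above is plain polling: the 403s appear while anonymous access to /healthz is still blocked (the RBAC bootstrap roles arrive via a post-start hook), and the 500s while individual poststarthook checks are still failing. A minimal polling sketch against the same endpoint; TLS verification is skipped because the apiserver certificate is signed by the cluster's own CA (an assumption acceptable only for this kind of local probe):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert signed by minikubeCA; skip verification for the probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.177:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver health")
}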
	I0603 13:50:24.768872 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:24.768879 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:24.771099 1143450 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:24.772806 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:24.784204 1143450 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
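The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not reproduced in the log; the sketch below writes a generic bridge + host-local conflist of the same general shape purely as an illustration (the subnet, name, and portmap entry are assumptions, not the real file):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Generic bridge CNI config; NOT the exact bytes minikube installs.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed (needs root):", err)
		return
	}
	fmt.Println("wrote bridge CNI config")
}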
	I0603 13:50:24.805572 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:24.816944 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:24.816988 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:24.816997 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:24.817008 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:24.817021 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:24.817028 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:50:24.817037 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:24.817044 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:24.817050 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:50:24.817060 1143450 system_pods.go:74] duration metric: took 11.461696ms to wait for pod list to return data ...
	I0603 13:50:24.817069 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:24.820804 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:24.820834 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:24.820846 1143450 node_conditions.go:105] duration metric: took 3.771492ms to run NodePressure ...
	I0603 13:50:24.820865 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:25.098472 1143450 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103237 1143450 kubeadm.go:733] kubelet initialised
	I0603 13:50:25.103263 1143450 kubeadm.go:734] duration metric: took 4.763539ms waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103274 1143450 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:25.109364 1143450 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.114629 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114662 1143450 pod_ready.go:81] duration metric: took 5.268473ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.114676 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114687 1143450 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.118734 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118777 1143450 pod_ready.go:81] duration metric: took 4.079659ms for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.118790 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118810 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.123298 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123334 1143450 pod_ready.go:81] duration metric: took 4.509948ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.123351 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123361 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.210283 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210316 1143450 pod_ready.go:81] duration metric: took 86.945898ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.210329 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210338 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.609043 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609074 1143450 pod_ready.go:81] duration metric: took 398.728553ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.609084 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609091 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.009831 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009866 1143450 pod_ready.go:81] duration metric: took 400.766037ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.009880 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009888 1143450 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.410271 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410301 1143450 pod_ready.go:81] duration metric: took 400.402293ms for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.410315 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410326 1143450 pod_ready.go:38] duration metric: took 1.307039933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
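Every pod in the WaitExtra loop above is skipped because the node itself still reports Ready:"False" right after the kubelet restart; once the node becomes Ready, the same pods are re-checked for their own Ready condition. A client-go sketch of that per-pod check, reusing the kubeconfig path and pod name seen in this log (a sketch, not the pod_ready.go implementation):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as written by this test run (see the settings.go lines below).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19011-1078924/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"coredns-7db6d8ff4d-flxqj", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
		}
	}
}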
	I0603 13:50:26.410347 1143450 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:26.422726 1143450 ops.go:34] apiserver oom_adj: -16
	I0603 13:50:26.422753 1143450 kubeadm.go:591] duration metric: took 11.367271168s to restartPrimaryControlPlane
	I0603 13:50:26.422763 1143450 kubeadm.go:393] duration metric: took 11.445396197s to StartCluster
	I0603 13:50:26.422784 1143450 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.422866 1143450 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:26.424423 1143450 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.424744 1143450 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:26.426628 1143450 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:26.424855 1143450 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:26.424985 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:26.428227 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:26.428239 1143450 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428241 1143450 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428275 1143450 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-030870"
	I0603 13:50:26.428285 1143450 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428297 1143450 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:50:26.428243 1143450 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428338 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428404 1143450 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428428 1143450 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:26.428501 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428650 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428676 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428724 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428751 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428948 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.429001 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.445709 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0603 13:50:26.446187 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.446719 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.446743 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.447152 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.447817 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.447852 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.449660 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0603 13:50:26.449721 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0603 13:50:26.450120 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450161 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450735 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450755 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.450906 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450930 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.451177 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451333 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451421 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.451909 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.451951 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.455458 1143450 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.455484 1143450 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:26.455523 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.455776 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.455825 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.470807 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0603 13:50:26.471179 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0603 13:50:26.471763 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.471921 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472042 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0603 13:50:26.472471 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472501 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472575 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472750 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472760 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472966 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473095 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.473118 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.473132 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473134 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473357 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473486 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.474129 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.474183 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.475437 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.475594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.477911 1143450 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:26.479474 1143450 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:24.304462 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:24.305104 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:24.305175 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:24.305099 1144654 retry.go:31] will retry after 4.178596743s: waiting for machine to come up
	I0603 13:50:26.480998 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:26.481021 1143450 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:26.481047 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.479556 1143450 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.481095 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:26.481116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.484634 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.484694 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485147 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485160 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485538 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485628 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485729 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485829 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485856 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.485993 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.486040 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.486158 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.496035 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0603 13:50:26.496671 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.497270 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.497290 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.497719 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.497989 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.500018 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.500280 1143450 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.500298 1143450 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:26.500318 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.503226 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503732 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.503768 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503967 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.504212 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.504399 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.504556 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.608774 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:26.629145 1143450 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:26.692164 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.784756 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.788686 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:26.788711 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:26.841094 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:26.841129 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:26.907657 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:26.907688 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:26.963244 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963280 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963618 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963641 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963649 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963653 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.963657 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963962 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963980 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963982 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.971726 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.971748 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.972101 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.972125 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.975238 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:27.653643 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.653689 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654037 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654061 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.654078 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.654087 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654429 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.654484 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654507 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847367 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847397 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.847745 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.847770 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847779 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847785 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.847793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.848112 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.848130 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.848144 1143450 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-030870"
	I0603 13:50:27.851386 1143450 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0603 13:50:23.817272 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:25.818013 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:27.818160 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:29.798777 1142862 start.go:364] duration metric: took 56.694826675s to acquireMachinesLock for "no-preload-817450"
	I0603 13:50:29.798855 1142862 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:29.798866 1142862 fix.go:54] fixHost starting: 
	I0603 13:50:29.799329 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:29.799369 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:29.817787 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0603 13:50:29.818396 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:29.819003 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:50:29.819025 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:29.819450 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:29.819617 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:29.819782 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:50:29.821742 1142862 fix.go:112] recreateIfNeeded on no-preload-817450: state=Stopped err=<nil>
	I0603 13:50:29.821777 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	W0603 13:50:29.821973 1142862 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:29.823915 1142862 out.go:177] * Restarting existing kvm2 VM for "no-preload-817450" ...
	I0603 13:50:27.852929 1143450 addons.go:510] duration metric: took 1.428071927s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0603 13:50:28.633355 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:29.825584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Start
	I0603 13:50:29.825783 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring networks are active...
	I0603 13:50:29.826746 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network default is active
	I0603 13:50:29.827116 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network mk-no-preload-817450 is active
	I0603 13:50:29.827617 1142862 main.go:141] libmachine: (no-preload-817450) Getting domain xml...
	I0603 13:50:29.828419 1142862 main.go:141] libmachine: (no-preload-817450) Creating domain...
	I0603 13:50:28.485041 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.485598 1143678 main.go:141] libmachine: (old-k8s-version-151788) Found IP for machine: 192.168.50.65
	I0603 13:50:28.485624 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserving static IP address...
	I0603 13:50:28.485639 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has current primary IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.486053 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserved static IP address: 192.168.50.65
	I0603 13:50:28.486109 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.486123 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting for SSH to be available...
	I0603 13:50:28.486144 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | skip adding static IP to network mk-old-k8s-version-151788 - found existing host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"}
	I0603 13:50:28.486156 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Getting to WaitForSSH function...
	I0603 13:50:28.488305 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.488754 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.488788 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.489025 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH client type: external
	I0603 13:50:28.489048 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa (-rw-------)
	I0603 13:50:28.489114 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:28.489147 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | About to run SSH command:
	I0603 13:50:28.489167 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | exit 0
	I0603 13:50:28.613732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:28.614183 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:50:28.614879 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.617742 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.618270 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618481 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:50:28.618699 1143678 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:28.618719 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:28.618967 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.621356 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621655 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.621685 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.622117 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622321 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622511 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.622750 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.622946 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.622958 1143678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:28.726383 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:28.726419 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.726740 1143678 buildroot.go:166] provisioning hostname "old-k8s-version-151788"
	I0603 13:50:28.726777 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.727042 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.729901 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730372 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.730402 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730599 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.730824 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731031 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731205 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.731403 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.731585 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.731599 1143678 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-151788 && echo "old-k8s-version-151788" | sudo tee /etc/hostname
	I0603 13:50:28.848834 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-151788
	
	I0603 13:50:28.848867 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.852250 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852698 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.852721 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852980 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.853239 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853536 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853819 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.854093 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.854338 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.854367 1143678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-151788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-151788/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-151788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:28.967427 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
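	(Illustrative aside, not taken from minikube's source or from this run: the SSH exchange above is the provisioner pushing a small shell guard so the guest's hostname resolves via /etc/hosts. A minimal Go sketch of running such a command over SSH with golang.org/x/crypto/ssh could look like the following; the address, user, and key path are placeholders.)

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Hypothetical inputs; substitute the machine's real address and key.
		const addr = "192.0.2.10:22"      // placeholder address, not from the log
		const keyPath = "/path/to/id_rsa" // placeholder private key path

		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}

		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// Same idea as the guard in the log: append the hostname entry if it is missing.
		out, err := session.CombinedOutput(`grep -q old-k8s-version-151788 /etc/hosts || echo '127.0.1.1 old-k8s-version-151788' | sudo tee -a /etc/hosts`)
		if err != nil {
			log.Fatalf("remote command failed: %v (output: %s)", err, out)
		}
		fmt.Printf("output: %s\n", out)
	}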
	I0603 13:50:28.967461 1143678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:28.967520 1143678 buildroot.go:174] setting up certificates
	I0603 13:50:28.967538 1143678 provision.go:84] configureAuth start
	I0603 13:50:28.967550 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.967946 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.970841 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971226 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.971256 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971449 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.974316 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974702 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.974732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974911 1143678 provision.go:143] copyHostCerts
	I0603 13:50:28.974994 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:28.975010 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:28.975068 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:28.975247 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:28.975260 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:28.975283 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:28.975354 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:28.975362 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:28.975385 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:28.975463 1143678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-151788 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-151788]
	I0603 13:50:29.096777 1143678 provision.go:177] copyRemoteCerts
	I0603 13:50:29.096835 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:29.096865 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.099989 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100408 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.100434 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100644 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.100831 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.100975 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.101144 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.184886 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:29.211432 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 13:50:29.238552 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:50:29.266803 1143678 provision.go:87] duration metric: took 299.247567ms to configureAuth
	I0603 13:50:29.266844 1143678 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:29.267107 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:50:29.267220 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.270966 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271417 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.271472 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271688 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.271893 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272121 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272327 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.272544 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.272787 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.272811 1143678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:29.548407 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:29.548437 1143678 machine.go:97] duration metric: took 929.724002ms to provisionDockerMachine
	I0603 13:50:29.548449 1143678 start.go:293] postStartSetup for "old-k8s-version-151788" (driver="kvm2")
	I0603 13:50:29.548461 1143678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:29.548486 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.548924 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:29.548992 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.552127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552531 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.552571 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552756 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.552974 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.553166 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.553364 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.637026 1143678 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:29.641264 1143678 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:29.641293 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:29.641376 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:29.641509 1143678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:29.641600 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:29.657273 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:29.688757 1143678 start.go:296] duration metric: took 140.291954ms for postStartSetup
	I0603 13:50:29.688806 1143678 fix.go:56] duration metric: took 21.605539652s for fixHost
	I0603 13:50:29.688843 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.691764 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692170 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.692216 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692356 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.692623 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692814 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692996 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.693180 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.693372 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.693384 1143678 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:29.798629 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422629.770375968
	
	I0603 13:50:29.798655 1143678 fix.go:216] guest clock: 1717422629.770375968
	I0603 13:50:29.798662 1143678 fix.go:229] Guest: 2024-06-03 13:50:29.770375968 +0000 UTC Remote: 2024-06-03 13:50:29.688811675 +0000 UTC m=+247.377673500 (delta=81.564293ms)
	I0603 13:50:29.798683 1143678 fix.go:200] guest clock delta is within tolerance: 81.564293ms
	I0603 13:50:29.798688 1143678 start.go:83] releasing machines lock for "old-k8s-version-151788", held for 21.715483341s
	I0603 13:50:29.798712 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.799019 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:29.802078 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802479 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.802522 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802674 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803271 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803496 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803584 1143678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:29.803646 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.803961 1143678 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:29.803988 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.806505 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806863 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806926 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.806961 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807093 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807299 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807345 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.807386 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807476 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.807670 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807669 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.807841 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807947 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.808183 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.890622 1143678 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:29.918437 1143678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:30.064471 1143678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:30.073881 1143678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:30.073969 1143678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:30.097037 1143678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:30.097070 1143678 start.go:494] detecting cgroup driver to use...
	I0603 13:50:30.097147 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:30.114374 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:30.132000 1143678 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:30.132075 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:30.148156 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:30.164601 1143678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:30.303125 1143678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:30.475478 1143678 docker.go:233] disabling docker service ...
	I0603 13:50:30.475578 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:30.494632 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:30.513383 1143678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:30.691539 1143678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:30.849280 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:30.869107 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:30.893451 1143678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 13:50:30.893528 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.909358 1143678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:30.909465 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.926891 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.941879 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.957985 1143678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:30.971349 1143678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:30.984948 1143678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:30.985023 1143678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:30.999255 1143678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:31.011615 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:31.162848 1143678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:31.352121 1143678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:31.352190 1143678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:31.357946 1143678 start.go:562] Will wait 60s for crictl version
	I0603 13:50:31.358032 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:31.362540 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:31.410642 1143678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:31.410757 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.444750 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.482404 1143678 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 13:50:31.484218 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:31.488049 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488663 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:31.488695 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488985 1143678 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:31.494813 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:31.511436 1143678 kubeadm.go:877] updating cluster {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:31.511597 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:50:31.511659 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:31.571733 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:31.571819 1143678 ssh_runner.go:195] Run: which lz4
	I0603 13:50:31.577765 1143678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 13:50:31.583983 1143678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:31.584025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 13:50:30.319230 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:32.824874 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:30.633456 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:32.134192 1143450 node_ready.go:49] node "default-k8s-diff-port-030870" has status "Ready":"True"
	I0603 13:50:32.134227 1143450 node_ready.go:38] duration metric: took 5.505047986s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:32.134241 1143450 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:32.143157 1143450 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150075 1143450 pod_ready.go:92] pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:32.150113 1143450 pod_ready.go:81] duration metric: took 6.922006ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150128 1143450 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:34.157758 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:31.283193 1142862 main.go:141] libmachine: (no-preload-817450) Waiting to get IP...
	I0603 13:50:31.284191 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.284681 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.284757 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.284641 1144889 retry.go:31] will retry after 246.139268ms: waiting for machine to come up
	I0603 13:50:31.532345 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.533024 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.533056 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.532956 1144889 retry.go:31] will retry after 283.586657ms: waiting for machine to come up
	I0603 13:50:31.818610 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.819271 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.819302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.819235 1144889 retry.go:31] will retry after 345.327314ms: waiting for machine to come up
	I0603 13:50:32.165948 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.166532 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.166585 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.166485 1144889 retry.go:31] will retry after 567.370644ms: waiting for machine to come up
	I0603 13:50:32.735409 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.736074 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.736118 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.735978 1144889 retry.go:31] will retry after 523.349811ms: waiting for machine to come up
	I0603 13:50:33.261023 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.261738 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.261769 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.261685 1144889 retry.go:31] will retry after 617.256992ms: waiting for machine to come up
	I0603 13:50:33.880579 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.881159 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.881188 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.881113 1144889 retry.go:31] will retry after 975.807438ms: waiting for machine to come up
	I0603 13:50:34.858935 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:34.859418 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:34.859447 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:34.859365 1144889 retry.go:31] will retry after 1.257722281s: waiting for machine to come up
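The "will retry after ...ms: waiting for machine to come up" lines above are a polling loop with randomized, growing delays while libmachine waits for the KVM guest to obtain a DHCP lease. A hypothetical shell sketch of the same idea (not minikube's actual implementation; the MAC address and network name are taken from the log, the doubling delay is an assumption):

    mac="52:54:00:8f:cc:be"; net="mk-no-preload-817450"; delay=1
    while :; do
      # look for a DHCP lease matching the domain's MAC in the libvirt network
      ip=$(virsh --connect qemu:///system net-dhcp-leases "$net" \
             | awk -v m="$mac" '$0 ~ m {sub(/\/.*/, "", $5); print $5; exit}')
      [ -n "$ip" ] && { echo "machine is up at $ip"; break; }
      echo "will retry after ${delay}s: waiting for machine to come up"
      sleep "$delay"; delay=$((delay * 2))
    done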
	I0603 13:50:33.399678 1143678 crio.go:462] duration metric: took 1.821959808s to copy over tarball
	I0603 13:50:33.399768 1143678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:36.631033 1143678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.231219364s)
	I0603 13:50:36.631081 1143678 crio.go:469] duration metric: took 3.231364789s to extract the tarball
	I0603 13:50:36.631092 1143678 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:50:36.677954 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:36.718160 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:36.718197 1143678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.718456 1143678 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.718302 1143678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.718343 1143678 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.718858 1143678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.720644 1143678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.720573 1143678 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720576 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.720603 1143678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.720608 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.721118 1143678 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.907182 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.907179 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.910017 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.920969 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.925739 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.935710 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.946767 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 13:50:36.973425 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.050763 1143678 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 13:50:37.050817 1143678 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.050846 1143678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 13:50:37.050876 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.050880 1143678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.050906 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162505 1143678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 13:50:37.162561 1143678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.162608 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162706 1143678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 13:50:37.162727 1143678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.162754 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162858 1143678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 13:50:37.162898 1143678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.162922 1143678 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 13:50:37.162965 1143678 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 13:50:37.163001 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162943 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.164963 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.165019 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.165136 1143678 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 13:50:37.165260 1143678 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.165295 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.188179 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.188292 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 13:50:37.188315 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.188371 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.188561 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.300592 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 13:50:37.300642 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 13:50:35.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.160066 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.334685 1143450 pod_ready.go:92] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.334719 1143450 pod_ready.go:81] duration metric: took 5.184582613s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.334732 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341104 1143450 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.341140 1143450 pod_ready.go:81] duration metric: took 6.399805ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341154 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347174 1143450 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.347208 1143450 pod_ready.go:81] duration metric: took 6.044519ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347220 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356909 1143450 pod_ready.go:92] pod "kube-proxy-thsrx" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.356949 1143450 pod_ready.go:81] duration metric: took 9.72108ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356962 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363891 1143450 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.363915 1143450 pod_ready.go:81] duration metric: took 6.9442ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363927 1143450 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:39.372092 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
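The pod_ready waits above poll each system-critical pod's Ready condition for up to 6m0s. Roughly the same check expressed with kubectl (the test itself polls via client-go; the context, namespace, and names are the ones that appear in the log):

    kubectl --context default-k8s-diff-port-030870 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s
    kubectl --context default-k8s-diff-port-030870 -n kube-system \
      wait --for=condition=Ready pod/etcd-default-k8s-diff-port-030870 --timeout=6m0s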
	I0603 13:50:36.118754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:36.119214 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:36.119251 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:36.119148 1144889 retry.go:31] will retry after 1.380813987s: waiting for machine to come up
	I0603 13:50:37.501464 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:37.501889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:37.501937 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:37.501849 1144889 retry.go:31] will retry after 2.144177789s: waiting for machine to come up
	I0603 13:50:39.648238 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:39.648744 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:39.648768 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:39.648693 1144889 retry.go:31] will retry after 1.947487062s: waiting for machine to come up
	I0603 13:50:37.360149 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 13:50:37.360196 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 13:50:37.360346 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 13:50:37.360371 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 13:50:37.360436 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 13:50:37.543409 1143678 cache_images.go:92] duration metric: took 825.189409ms to LoadCachedImages
	W0603 13:50:37.543559 1143678 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 13:50:37.543581 1143678 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0603 13:50:37.543723 1143678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-151788 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:37.543804 1143678 ssh_runner.go:195] Run: crio config
	I0603 13:50:37.601388 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:50:37.601428 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:37.601445 1143678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:37.601471 1143678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-151788 NodeName:old-k8s-version-151788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 13:50:37.601664 1143678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-151788"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:37.601746 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 13:50:37.613507 1143678 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:37.613588 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:37.623853 1143678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0603 13:50:37.642298 1143678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:37.660863 1143678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0603 13:50:37.679974 1143678 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:37.685376 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:37.702732 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:37.859343 1143678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:37.880684 1143678 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788 for IP: 192.168.50.65
	I0603 13:50:37.880714 1143678 certs.go:194] generating shared ca certs ...
	I0603 13:50:37.880737 1143678 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:37.880952 1143678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:37.881012 1143678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:37.881024 1143678 certs.go:256] generating profile certs ...
	I0603 13:50:37.881179 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key
	I0603 13:50:37.881279 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3
	I0603 13:50:37.881334 1143678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key
	I0603 13:50:37.881554 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:37.881602 1143678 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:37.881629 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:37.881667 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:37.881698 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:37.881730 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:37.881805 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:37.882741 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:37.919377 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:37.957218 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:37.987016 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:38.024442 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 13:50:38.051406 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:38.094816 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:38.143689 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:50:38.171488 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:38.197296 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:38.224025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:38.250728 1143678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:38.270485 1143678 ssh_runner.go:195] Run: openssl version
	I0603 13:50:38.276995 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:38.288742 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293880 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293955 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.300456 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:38.312180 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:38.324349 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329812 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329881 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.337560 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:38.350229 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:38.362635 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368842 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368920 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.376029 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
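Each "openssl x509 -hash -noout" plus "ln -fs ... /etc/ssl/certs/<hash>.0" pair above installs a certificate into the system trust store the way OpenSSL expects: the file is linked under its subject-name hash so lookups in the hashed directory find it. A minimal sketch of one such pair, using the same paths as the log:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0 here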
	I0603 13:50:38.387703 1143678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:38.393071 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:38.399760 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:38.406332 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:38.413154 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:38.419162 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:38.425818 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
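The -checkend 86400 runs above ask whether each control-plane certificate remains valid for at least the next 24 hours; openssl exits 0 if the cert will not expire within that window and 1 if it will. A minimal sketch of the same check on one of the certs from the log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert is valid for at least another day"
    else
      echo "cert expires within 24h"
    fi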
	I0603 13:50:38.432495 1143678 kubeadm.go:391] StartCluster: {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:38.432659 1143678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:38.432718 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.479889 1143678 cri.go:89] found id: ""
	I0603 13:50:38.479975 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:38.490549 1143678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:38.490574 1143678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:38.490580 1143678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:38.490637 1143678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:38.501024 1143678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:38.503665 1143678 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:38.504563 1143678 kubeconfig.go:62] /home/jenkins/minikube-integration/19011-1078924/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-151788" cluster setting kubeconfig missing "old-k8s-version-151788" context setting]
	I0603 13:50:38.505614 1143678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:38.562691 1143678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:38.573839 1143678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.65
	I0603 13:50:38.573889 1143678 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:38.573905 1143678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:38.573987 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.615876 1143678 cri.go:89] found id: ""
	I0603 13:50:38.615972 1143678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:38.633568 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:38.645197 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:38.645229 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:38.645291 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:50:38.655344 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:38.655423 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:38.665789 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:50:38.674765 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:38.674842 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:38.684268 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.693586 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:38.693650 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.703313 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:50:38.712523 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:38.712597 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:50:38.722362 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:38.732190 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:38.875545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.722534 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.970226 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.090817 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
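The phased "kubeadm init phase" runs above regenerate, in order, the certificates, the kubeconfigs, the kubelet bootstrap, the control-plane static pod manifests, and the local etcd manifest. A quick way to confirm the manifests were written (staticPodPath per the KubeletConfiguration above; the expected file list is an assumption based on standard kubeadm output, not taken from this log):

    sudo ls /etc/kubernetes/manifests/
    # typically: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml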
	I0603 13:50:40.193178 1143678 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:40.193485 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:40.693580 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.193579 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.693608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:42.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:39.318177 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.818337 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.373738 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:43.870381 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.597745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:41.598343 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:41.598372 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:41.598280 1144889 retry.go:31] will retry after 2.47307834s: waiting for machine to come up
	I0603 13:50:44.074548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:44.075009 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:44.075037 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:44.074970 1144889 retry.go:31] will retry after 3.055733752s: waiting for machine to come up
	I0603 13:50:42.693593 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.194448 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.693645 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.694583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.194065 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.694138 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.194173 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.694344 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:47.194063 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
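The repeated pgrep runs above are a 500ms poll waiting for a kube-apiserver process launched from /var/lib/minikube to appear. A hedged shell equivalent of that wait (the pattern is copied from the log; the loop itself is an illustration, not minikube's code):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
    echo "kube-apiserver process is up"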
	I0603 13:50:44.316348 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:46.317245 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:47.133727 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134266 1142862 main.go:141] libmachine: (no-preload-817450) Found IP for machine: 192.168.72.125
	I0603 13:50:47.134301 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has current primary IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134308 1142862 main.go:141] libmachine: (no-preload-817450) Reserving static IP address...
	I0603 13:50:47.134745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.134777 1142862 main.go:141] libmachine: (no-preload-817450) Reserved static IP address: 192.168.72.125
	I0603 13:50:47.134797 1142862 main.go:141] libmachine: (no-preload-817450) DBG | skip adding static IP to network mk-no-preload-817450 - found existing host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"}
	I0603 13:50:47.134816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Getting to WaitForSSH function...
	I0603 13:50:47.134858 1142862 main.go:141] libmachine: (no-preload-817450) Waiting for SSH to be available...
	I0603 13:50:47.137239 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137669 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.137705 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137810 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH client type: external
	I0603 13:50:47.137835 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa (-rw-------)
	I0603 13:50:47.137870 1142862 main.go:141] libmachine: (no-preload-817450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:47.137879 1142862 main.go:141] libmachine: (no-preload-817450) DBG | About to run SSH command:
	I0603 13:50:47.137889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | exit 0
	I0603 13:50:47.265932 1142862 main.go:141] libmachine: (no-preload-817450) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:47.266268 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetConfigRaw
	I0603 13:50:47.267007 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.269463 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.269849 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.269885 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.270135 1142862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/config.json ...
	I0603 13:50:47.270355 1142862 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:47.270375 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:47.270589 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.272915 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273307 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.273341 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273543 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.273737 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.273905 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.274061 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.274242 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.274417 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.274429 1142862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:47.380760 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:47.380789 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381068 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:50:47.381095 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381314 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.384093 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384460 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.384482 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.384798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.384938 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.385099 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.385276 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.385533 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.385562 1142862 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-817450 && echo "no-preload-817450" | sudo tee /etc/hostname
	I0603 13:50:47.505203 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-817450
	
	I0603 13:50:47.505231 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.508267 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508696 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.508721 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508877 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.509066 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509281 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509437 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.509606 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.509780 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.509795 1142862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-817450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-817450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-817450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:47.618705 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
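Each "About to run SSH command" entry above is the provisioner executing a shell snippet on the guest over SSH; the block just shown pins the machine name in /etc/hosts so the hostname resolves locally. As a rough illustration only (not minikube's actual code), a minimal Go sketch of running that same kind of command with golang.org/x/crypto/ssh could look like the following; the address, user and key path are taken from the log lines nearby, everything else is an assumption:

	// Illustrative sketch only: run one provisioning command over SSH the way
	// the log above does. Error handling is deliberately minimal.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.72.125:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		// Same idempotent /etc/hosts update the provisioner ran above.
		cmd := `grep -xq '.*\sno-preload-817450' /etc/hosts || echo '127.0.1.1 no-preload-817450' | sudo tee -a /etc/hosts`
		out, err := sess.CombinedOutput(cmd)
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}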
	I0603 13:50:47.618757 1142862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:47.618822 1142862 buildroot.go:174] setting up certificates
	I0603 13:50:47.618835 1142862 provision.go:84] configureAuth start
	I0603 13:50:47.618854 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.619166 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.621974 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622512 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.622548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622652 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.624950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625275 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.625302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625419 1142862 provision.go:143] copyHostCerts
	I0603 13:50:47.625504 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:47.625520 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:47.625591 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:47.625697 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:47.625706 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:47.625725 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:47.625790 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:47.625800 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:47.625826 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:47.625891 1142862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.no-preload-817450 san=[127.0.0.1 192.168.72.125 localhost minikube no-preload-817450]
	I0603 13:50:47.733710 1142862 provision.go:177] copyRemoteCerts
	I0603 13:50:47.733769 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:47.733801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.736326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736657 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.736686 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.737036 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.737222 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.737341 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:47.821893 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:47.848085 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 13:50:47.875891 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:47.900761 1142862 provision.go:87] duration metric: took 281.906702ms to configureAuth
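The configureAuth step that just finished (provision.go:117) issues a per-machine server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the machine name, signed by the CA under .minikube/certs, and copies it to /etc/docker on the guest. The sketch below is only an approximation, not minikube's generator: it creates a throwaway CA so the example runs end to end, then issues a server certificate with the SAN list and the 26280h lifetime taken from the log.

	// Approximate sketch: issue a server cert with the SANs shown in the log,
	// signed by a CA. The CA here is generated on the fly just for the example.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA (in the real run this is .minikube/certs/ca.pem / ca-key.pem).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SAN list from provision.go:117 above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-817450"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-817450"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.125")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}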
	I0603 13:50:47.900795 1142862 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:47.900986 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:47.901072 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.904128 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904551 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.904581 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904802 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.905018 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905203 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905413 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.905609 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.905816 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.905839 1142862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:48.176290 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:48.176321 1142862 machine.go:97] duration metric: took 905.950732ms to provisionDockerMachine
	I0603 13:50:48.176333 1142862 start.go:293] postStartSetup for "no-preload-817450" (driver="kvm2")
	I0603 13:50:48.176344 1142862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:48.176361 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.176689 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:48.176712 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.179595 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.179994 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.180020 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.180186 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.180398 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.180561 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.180704 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.267996 1142862 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:48.272936 1142862 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:48.272970 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:48.273044 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:48.273141 1142862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:48.273285 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:48.283984 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:48.310846 1142862 start.go:296] duration metric: took 134.495139ms for postStartSetup
	I0603 13:50:48.310899 1142862 fix.go:56] duration metric: took 18.512032449s for fixHost
	I0603 13:50:48.310928 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.313969 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314331 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.314358 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.314896 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315258 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.315442 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:48.315681 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:48.315698 1142862 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:48.422576 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422648.390814282
	
	I0603 13:50:48.422599 1142862 fix.go:216] guest clock: 1717422648.390814282
	I0603 13:50:48.422606 1142862 fix.go:229] Guest: 2024-06-03 13:50:48.390814282 +0000 UTC Remote: 2024-06-03 13:50:48.310904217 +0000 UTC m=+357.796105522 (delta=79.910065ms)
	I0603 13:50:48.422636 1142862 fix.go:200] guest clock delta is within tolerance: 79.910065ms
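The fix.go lines above compare the guest clock (read with `date +%s.%N` over SSH) against the controller's local time and only resync when the delta leaves a tolerance window; here the 79.9ms delta is accepted. A small hedged sketch of that comparison follows; the one-second tolerance is an assumption, not a value taken from the log.

	// Sketch of a guest-clock tolerance check. The guest timestamp is whatever
	// `date +%s.%N` returned over SSH, e.g. "1717422648.390814282".
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func guestClockDelta(guest string, local time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return 0, err
			}
		}
		return local.Sub(time.Unix(sec, nsec)), nil
	}

	func main() {
		delta, err := guestClockDelta("1717422648.390814282", time.Now())
		if err != nil {
			panic(err)
		}
		tolerance := time.Second // assumed tolerance, not taken from the log
		if delta < -tolerance || delta > tolerance {
			fmt.Printf("guest clock delta %v outside tolerance, would resync\n", delta)
		} else {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}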
	I0603 13:50:48.422642 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 18.623816039s
	I0603 13:50:48.422659 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.422954 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:48.426261 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426671 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.426701 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426864 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427460 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427661 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427762 1142862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:48.427827 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.427878 1142862 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:48.427914 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.430586 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430830 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430965 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.430993 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431177 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.431355 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431387 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431516 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431676 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431751 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.431798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431936 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.506899 1142862 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:48.545903 1142862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:48.700235 1142862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:48.706614 1142862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:48.706704 1142862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:48.724565 1142862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:48.724592 1142862 start.go:494] detecting cgroup driver to use...
	I0603 13:50:48.724664 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:48.741006 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:48.758824 1142862 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:48.758899 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:48.773280 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:48.791049 1142862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:48.917847 1142862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:49.081837 1142862 docker.go:233] disabling docker service ...
	I0603 13:50:49.081927 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:49.097577 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:49.112592 1142862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:49.228447 1142862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:49.350782 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:49.366017 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:49.385685 1142862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:49.385765 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.396361 1142862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:49.396432 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.408606 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.419642 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.430431 1142862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:49.441378 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.451810 1142862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.469080 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
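The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as the pause image, "cgroupfs" as the cgroup manager, "pod" as the conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl. The Go sketch below is only an illustration of what those edits amount to; it simplifies the real sequence (for example it does not first delete pre-existing conmon_cgroup or sysctl lines) and is not how minikube applies them.

	// Illustrative only: approximately the same config rewrites the sed commands
	// above perform, done with Go's regexp package on 02-crio.conf.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf := string(data)

		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

		// Ensure an unprivileged-port sysctl entry exists, mirroring the default_sysctls edit.
		if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
			conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		}

		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			panic(err)
		}
	}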
	I0603 13:50:49.480054 1142862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:49.489742 1142862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:49.489814 1142862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:49.502889 1142862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:49.512414 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:49.639903 1142862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:49.786388 1142862 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:49.786486 1142862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:49.791642 1142862 start.go:562] Will wait 60s for crictl version
	I0603 13:50:49.791711 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:49.796156 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:49.841667 1142862 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
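After restarting CRI-O, start.go:541 and start.go:562 above wait up to 60s for /var/run/crio/crio.sock to appear and for `crictl version` to answer, which is the output block just shown. A minimal hedged sketch of that kind of readiness poll follows; the retry interval is an assumption.

	// Sketch of a readiness poll: retry `crictl version` until it succeeds or a
	// 60s deadline passes. The interval and binary path are assumptions.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForCrictl(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
			if err == nil {
				fmt.Printf("crictl is ready:\n%s", out)
				return nil
			}
			time.Sleep(2 * time.Second) // assumed retry interval
		}
		return fmt.Errorf("crictl did not become ready within %v", timeout)
	}

	func main() {
		if err := waitForCrictl(60 * time.Second); err != nil {
			panic(err)
		}
	}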
	I0603 13:50:49.841765 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.872213 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.910979 1142862 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:46.370749 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:48.870860 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:49.912417 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:49.915368 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915731 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:49.915759 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915913 1142862 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:49.920247 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:49.933231 1142862 kubeadm.go:877] updating cluster {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:49.933358 1142862 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:49.933388 1142862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:49.970029 1142862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:49.970059 1142862 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:49.970118 1142862 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:49.970147 1142862 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.970163 1142862 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.970198 1142862 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.970239 1142862 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.970316 1142862 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.970328 1142862 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.970379 1142862 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971837 1142862 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.971841 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.971808 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.971876 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.971816 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.971813 1142862 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.126557 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 13:50:50.146394 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.149455 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.149755 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.154990 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.162983 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.177520 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.188703 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.299288 1142862 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 13:50:50.299312 1142862 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 13:50:50.299345 1142862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.299350 1142862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.299389 1142862 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 13:50:50.299406 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299413 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299422 1142862 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.299488 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353368 1142862 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 13:50:50.353431 1142862 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.353485 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353506 1142862 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 13:50:50.353543 1142862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.353591 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379011 1142862 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 13:50:50.379028 1142862 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 13:50:50.379054 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.379062 1142862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.379105 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379075 1142862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.379146 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.379181 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379212 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.379229 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.379239 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.482204 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 13:50:50.482210 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.482332 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.511560 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 13:50:50.511671 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 13:50:50.511721 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.511769 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:50.511772 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 13:50:50.511682 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:50.511868 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:50.512290 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 13:50:50.512360 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:50.549035 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 13:50:50.549061 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 13:50:50.549066 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549156 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549166 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:50:47.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.193894 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.694053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.694081 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.194053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.694265 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.694283 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:52.194444 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.321194 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.816679 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:52.818121 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:51.372716 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:53.372880 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.573615 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 13:50:50.573661 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 13:50:50.573708 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 13:50:50.573737 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:50.573754 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 13:50:50.573816 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 13:50:50.573839 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 13:50:52.739312 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (2.190102069s)
	I0603 13:50:52.739333 1142862 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.165569436s)
	I0603 13:50:52.739354 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 13:50:52.739365 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 13:50:52.739372 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:52.739420 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:54.995960 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.256502953s)
	I0603 13:50:54.996000 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 13:50:54.996019 1142862 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:54.996076 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:52.694071 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.193597 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.694503 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.193609 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.694446 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.193856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.693583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.194271 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.693558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:57.194427 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.317668 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:57.318423 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.872030 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:58.376034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.844775 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 13:50:55.844853 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:55.844967 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:58.110074 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.265068331s)
	I0603 13:50:58.110103 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 13:50:58.110115 1142862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:58.110169 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:59.979789 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.869594477s)
	I0603 13:50:59.979817 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 13:50:59.979832 1142862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:59.979875 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:57.694027 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.193718 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.693488 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.193725 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.694310 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.194455 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.694182 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.193916 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.693504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:02.194236 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.816444 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:01.817757 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:00.872105 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:03.373427 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:04.067476 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.087571936s)
	I0603 13:51:04.067529 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 13:51:04.067549 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:04.067605 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:02.694248 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.194094 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.694072 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.194494 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.693899 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.193578 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.193934 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.693586 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:07.193993 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.316979 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:06.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.871061 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:08.371377 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.819264 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.75162069s)
	I0603 13:51:05.819302 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 13:51:05.819334 1142862 cache_images.go:123] Successfully loaded all cached images
	I0603 13:51:05.819341 1142862 cache_images.go:92] duration metric: took 15.849267186s to LoadCachedImages
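Because no preload tarball exists for this Kubernetes version and runtime combination (crio.go:510 earlier), each required image was checked in the runtime, removed when it was present under the wrong digest, and then loaded from the local cache with `sudo podman load -i <tarball>`, which is the loop that just completed in 15.8s. The helper below is a hypothetical, simplified sketch of that loop; in the real run the commands are executed on the guest over SSH, and the tarball paths are the ones shown in the log.

	// Hypothetical helper mirroring the cache-loading loop in the log:
	// for each cached image tarball on the guest, run `sudo podman load -i <tar>`.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tarballs := []string{
			"/var/lib/minikube/images/kube-scheduler_v1.30.1",
			"/var/lib/minikube/images/kube-controller-manager_v1.30.1",
			"/var/lib/minikube/images/storage-provisioner_v5",
			"/var/lib/minikube/images/kube-apiserver_v1.30.1",
			"/var/lib/minikube/images/coredns_v1.11.1",
			"/var/lib/minikube/images/etcd_3.5.12-0",
			"/var/lib/minikube/images/kube-proxy_v1.30.1",
		}
		for _, tar := range tarballs {
			out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
			if err != nil {
				fmt.Printf("load %s failed: %v\n%s\n", tar, err, out)
				continue
			}
			fmt.Printf("loaded %s\n", tar)
		}
	}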
	I0603 13:51:05.819352 1142862 kubeadm.go:928] updating node { 192.168.72.125 8443 v1.30.1 crio true true} ...
	I0603 13:51:05.819549 1142862 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-817450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:51:05.819636 1142862 ssh_runner.go:195] Run: crio config
	I0603 13:51:05.874089 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:05.874114 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:05.874127 1142862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:51:05.874152 1142862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.125 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-817450 NodeName:no-preload-817450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:51:05.874339 1142862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-817450"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
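The kubeadm options logged at kubeadm.go:181 are rendered into the YAML just shown and written to /var/tmp/minikube/kubeadm.yaml.new (2161 bytes, see the scp line below). As a loose illustration only, not minikube's actual generator or template, the same kind of rendering can be done with Go's text/template; the field values are the ones from the log, the type and template text are assumptions.

	// Loose illustration: render a kubeadm config fragment from the options
	// logged above using text/template. Not minikube's real template.
	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmOpts struct {
		AdvertiseAddress  string
		BindPort          int
		NodeName          string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, kubeadmOpts{
			AdvertiseAddress:  "192.168.72.125",
			BindPort:          8443,
			NodeName:          "no-preload-817450",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.30.1",
		})
	}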
	
	I0603 13:51:05.874411 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:51:05.886116 1142862 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:51:05.886185 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:51:05.896269 1142862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 13:51:05.914746 1142862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:51:05.931936 1142862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 13:51:05.949151 1142862 ssh_runner.go:195] Run: grep 192.168.72.125	control-plane.minikube.internal$ /etc/hosts
	I0603 13:51:05.953180 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:51:05.966675 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:51:06.107517 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:51:06.129233 1142862 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450 for IP: 192.168.72.125
	I0603 13:51:06.129264 1142862 certs.go:194] generating shared ca certs ...
	I0603 13:51:06.129280 1142862 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:51:06.129517 1142862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:51:06.129583 1142862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:51:06.129597 1142862 certs.go:256] generating profile certs ...
	I0603 13:51:06.129686 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/client.key
	I0603 13:51:06.129746 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key.e8ec030b
	I0603 13:51:06.129779 1142862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key
	I0603 13:51:06.129885 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:51:06.129912 1142862 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:51:06.129919 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:51:06.129939 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:51:06.129965 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:51:06.129991 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:51:06.130028 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:51:06.130817 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:51:06.171348 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:51:06.206270 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:51:06.240508 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:51:06.292262 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:51:06.320406 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:51:06.346655 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:51:06.375908 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:51:06.401723 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:51:06.425992 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:51:06.450484 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:51:06.475206 1142862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:51:06.492795 1142862 ssh_runner.go:195] Run: openssl version
	I0603 13:51:06.499759 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:51:06.511760 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516690 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516763 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.523284 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:51:06.535250 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:51:06.545921 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550765 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550823 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.556898 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:51:06.567717 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:51:06.578662 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584084 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584153 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.591566 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:51:06.603554 1142862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:51:06.608323 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:51:06.614939 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:51:06.621519 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:51:06.627525 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:51:06.633291 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:51:06.639258 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
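
The `openssl x509 ... -checkend 86400` runs above ask whether each certificate expires within the next 24 hours. A small sketch of the same check in Go with crypto/x509 follows; the function name and the chosen certificate path are illustrative.

// Sketch: report whether a PEM certificate expires within the given window,
// equivalent to `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expired (or expiring) if "now + window" is past NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
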
	I0603 13:51:06.644789 1142862 kubeadm.go:391] StartCluster: {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:51:06.644876 1142862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:51:06.644928 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.694731 1142862 cri.go:89] found id: ""
	I0603 13:51:06.694811 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:51:06.709773 1142862 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:51:06.709804 1142862 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:51:06.709812 1142862 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:51:06.709875 1142862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:51:06.721095 1142862 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:51:06.722256 1142862 kubeconfig.go:125] found "no-preload-817450" server: "https://192.168.72.125:8443"
	I0603 13:51:06.724877 1142862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:51:06.735753 1142862 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.125
	I0603 13:51:06.735789 1142862 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:51:06.735802 1142862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:51:06.735847 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.776650 1142862 cri.go:89] found id: ""
	I0603 13:51:06.776743 1142862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:51:06.796259 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:51:06.809765 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:51:06.809785 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:51:06.809839 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:51:06.819821 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:51:06.819878 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:51:06.829960 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:51:06.839510 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:51:06.839561 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:51:06.849346 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.858834 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:51:06.858886 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.869159 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:51:06.879672 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:51:06.879739 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:51:06.889393 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:51:06.899309 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:07.021375 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.119929 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.098510185s)
	I0603 13:51:08.119959 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.318752 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.396713 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
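
The restart path above re-runs individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered config, with the version-pinned binaries directory on PATH. Here is a rough sketch of that sequencing with os/exec; sudo and the SSH transport used by the harness are omitted, and the phase list and paths are copied from the log.

// Sketch: run the kubeadm init phases shown above, in order, against the
// rendered config, stopping on the first failure.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.30.1"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		// Use the pinned kubeadm binary directly instead of relying on PATH lookup.
		cmd := exec.Command(filepath.Join(binDir, "kubeadm"), args...)
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
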
	I0603 13:51:08.506285 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:51:08.506384 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.006865 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.506528 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.582432 1142862 api_server.go:72] duration metric: took 1.076134659s to wait for apiserver process to appear ...
	I0603 13:51:09.582463 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:51:09.582507 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:07.693540 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.194490 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.694498 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.194496 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.694286 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.193605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.694326 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.193904 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.694504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:12.194093 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.318739 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.817309 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.371622 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.372640 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:14.871007 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.049693 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.049731 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.049748 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.084495 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.084526 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.084541 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.141515 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.141555 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:12.582630 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.590279 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.082813 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.097350 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.097380 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.582895 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.587479 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.587511 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.083076 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.087531 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.087561 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.583203 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.587735 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.587781 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.082844 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.087403 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:15.087438 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.583226 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:51:15.601732 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:51:15.601762 1142862 api_server.go:131] duration metric: took 6.019291333s to wait for apiserver health ...
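
The wait above is a plain poll of the apiserver's /healthz endpoint: rejected with 403 for system:anonymous at first, then 500 while post-start hooks finish, then 200 "ok". Below is a minimal sketch of such a poll loop; skipping TLS verification is an assumption made purely to keep the example short, not necessarily how the real client is configured.

// Sketch: poll /healthz until it returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		// Roughly matches the ~500ms poll interval visible in the log timestamps.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.125:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}
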
	I0603 13:51:15.601775 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:15.601784 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:15.603654 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:51:12.694356 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.194219 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.693546 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.694003 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.694012 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.193567 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.694014 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:17.193554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.320666 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.818073 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.369593 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:19.369916 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.605291 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:51:15.618333 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
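
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration mentioned in the "Configuring bridge CNI" line. Its exact contents are not shown in the log; the sketch below writes a generic bridge + host-local conflist as an illustration only, reusing the podSubnet from the kubeadm config earlier in this log.

// Sketch: write an illustrative bridge CNI conflist (not minikube's literal file).
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// Writing under /etc/cni/net.d requires root; the path is taken from the log.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
		panic(err)
	}
}
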
	I0603 13:51:15.640539 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:51:15.651042 1142862 system_pods.go:59] 8 kube-system pods found
	I0603 13:51:15.651086 1142862 system_pods.go:61] "coredns-7db6d8ff4d-s562v" [be995d41-2b25-4839-a36b-212a507e7db7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:51:15.651102 1142862 system_pods.go:61] "etcd-no-preload-817450" [1b21708b-d81b-4594-a186-546437467c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:51:15.651117 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [0741a4bf-3161-4cf3-a9c6-36af2a0c4fde] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:51:15.651126 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [43713383-9197-4874-8aa9-7b1b1f05e4b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:51:15.651133 1142862 system_pods.go:61] "kube-proxy-2j4sg" [112657ad-311a-46ee-b5c0-6f544991465e] Running
	I0603 13:51:15.651145 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [40db5c40-dc01-4fd3-a5e0-06a6ee1fd0a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:51:15.651152 1142862 system_pods.go:61] "metrics-server-569cc877fc-mtvrq" [00cb7657-2564-4d25-8faa-b6f618e61115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:51:15.651163 1142862 system_pods.go:61] "storage-provisioner" [913d3120-32ce-4212-84be-9e3b99f2a894] Running
	I0603 13:51:15.651171 1142862 system_pods.go:74] duration metric: took 10.608401ms to wait for pod list to return data ...
	I0603 13:51:15.651181 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:51:15.654759 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:51:15.654784 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:51:15.654795 1142862 node_conditions.go:105] duration metric: took 3.608137ms to run NodePressure ...
	I0603 13:51:15.654813 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:15.940085 1142862 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944785 1142862 kubeadm.go:733] kubelet initialised
	I0603 13:51:15.944808 1142862 kubeadm.go:734] duration metric: took 4.692827ms waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944817 1142862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:51:15.950113 1142862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:17.958330 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.456029 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.693856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.193853 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.693858 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.193568 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.693680 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.193556 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.694129 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.193662 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.694445 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:22.193668 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.317128 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.317375 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.317530 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.371070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:23.871400 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.958183 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:21.958208 1142862 pod_ready.go:81] duration metric: took 6.008058251s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:21.958220 1142862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:23.964785 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.694004 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.193793 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.694340 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.194411 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.694314 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.194501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.693545 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.194255 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.694312 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:27.194453 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.817165 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.317176 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:26.369665 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:28.370392 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:25.966060 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.965236 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.965267 1142862 pod_ready.go:81] duration metric: took 6.007038184s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.965281 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969898 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.969920 1142862 pod_ready.go:81] duration metric: took 4.630357ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969932 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974500 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.974517 1142862 pod_ready.go:81] duration metric: took 4.577117ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974526 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978510 1142862 pod_ready.go:92] pod "kube-proxy-2j4sg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.978530 1142862 pod_ready.go:81] duration metric: took 3.997645ms for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978537 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982488 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.982507 1142862 pod_ready.go:81] duration metric: took 3.962666ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982518 1142862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:29.989265 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.694334 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.193809 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.693744 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.193608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.194111 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.694213 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.694336 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:32.193716 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.324199 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:30.370435 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.870510 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.872543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.990649 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.488899 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.693501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.194174 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.693995 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.194242 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.693961 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.194052 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.693730 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.193559 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.693763 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:37.194274 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.816533 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.316832 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.371543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:39.372034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.489364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:38.490431 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:40.490888 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.693590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.194328 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.694296 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.194272 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.693607 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:40.193595 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:40.193691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:40.237747 1143678 cri.go:89] found id: ""
	I0603 13:51:40.237776 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.237785 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:40.237792 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:40.237854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:40.275924 1143678 cri.go:89] found id: ""
	I0603 13:51:40.275964 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.275975 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:40.275983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:40.276049 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:40.314827 1143678 cri.go:89] found id: ""
	I0603 13:51:40.314857 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.314870 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:40.314877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:40.314939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:40.359040 1143678 cri.go:89] found id: ""
	I0603 13:51:40.359072 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.359084 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:40.359092 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:40.359154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:40.396136 1143678 cri.go:89] found id: ""
	I0603 13:51:40.396170 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.396185 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:40.396194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:40.396261 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:40.436766 1143678 cri.go:89] found id: ""
	I0603 13:51:40.436803 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.436814 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:40.436828 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:40.436902 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:40.477580 1143678 cri.go:89] found id: ""
	I0603 13:51:40.477606 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.477615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:40.477621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:40.477713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:40.518920 1143678 cri.go:89] found id: ""
	I0603 13:51:40.518960 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.518972 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:40.518984 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:40.519001 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:40.659881 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:40.659913 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:40.659932 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:40.727850 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:40.727894 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:40.774153 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:40.774189 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:40.828054 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:40.828094 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:38.820985 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.322044 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.870717 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.872112 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:42.988898 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:44.989384 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.342659 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:43.357063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:43.357131 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:43.398000 1143678 cri.go:89] found id: ""
	I0603 13:51:43.398036 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.398045 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:43.398051 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:43.398106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:43.436761 1143678 cri.go:89] found id: ""
	I0603 13:51:43.436805 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.436814 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:43.436820 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:43.436872 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:43.478122 1143678 cri.go:89] found id: ""
	I0603 13:51:43.478154 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.478164 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:43.478172 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:43.478243 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:43.514473 1143678 cri.go:89] found id: ""
	I0603 13:51:43.514511 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.514523 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:43.514532 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:43.514600 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:43.552354 1143678 cri.go:89] found id: ""
	I0603 13:51:43.552390 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.552399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:43.552405 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:43.552489 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:43.590637 1143678 cri.go:89] found id: ""
	I0603 13:51:43.590665 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.590677 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:43.590685 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:43.590745 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:43.633958 1143678 cri.go:89] found id: ""
	I0603 13:51:43.634001 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.634013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:43.634021 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:43.634088 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:43.672640 1143678 cri.go:89] found id: ""
	I0603 13:51:43.672683 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.672695 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:43.672716 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:43.672733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:43.725880 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:43.725937 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:43.743736 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:43.743771 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:43.831757 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:43.831785 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:43.831801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:43.905062 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:43.905114 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:46.459588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:46.472911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:46.472983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:46.513723 1143678 cri.go:89] found id: ""
	I0603 13:51:46.513757 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.513768 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:46.513776 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:46.513845 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:46.549205 1143678 cri.go:89] found id: ""
	I0603 13:51:46.549234 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.549242 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:46.549251 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:46.549311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:46.585004 1143678 cri.go:89] found id: ""
	I0603 13:51:46.585042 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.585053 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:46.585063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:46.585120 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:46.620534 1143678 cri.go:89] found id: ""
	I0603 13:51:46.620571 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.620582 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:46.620590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:46.620661 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:46.655974 1143678 cri.go:89] found id: ""
	I0603 13:51:46.656005 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.656014 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:46.656020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:46.656091 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:46.693078 1143678 cri.go:89] found id: ""
	I0603 13:51:46.693141 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.693158 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:46.693168 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:46.693244 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:46.729177 1143678 cri.go:89] found id: ""
	I0603 13:51:46.729213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.729223 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:46.729232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:46.729300 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:46.766899 1143678 cri.go:89] found id: ""
	I0603 13:51:46.766929 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.766937 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:46.766946 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:46.766959 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:46.826715 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:46.826757 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:46.841461 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:46.841504 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:46.914505 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:46.914533 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:46.914551 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:46.989886 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:46.989928 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:43.817456 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:45.817576 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.370927 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:48.371196 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.990440 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.489483 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.532804 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:49.547359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:49.547438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:49.584262 1143678 cri.go:89] found id: ""
	I0603 13:51:49.584299 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.584311 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:49.584319 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:49.584389 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:49.622332 1143678 cri.go:89] found id: ""
	I0603 13:51:49.622372 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.622384 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:49.622393 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:49.622488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:49.664339 1143678 cri.go:89] found id: ""
	I0603 13:51:49.664378 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.664390 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:49.664399 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:49.664468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:49.712528 1143678 cri.go:89] found id: ""
	I0603 13:51:49.712558 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.712565 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:49.712574 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:49.712640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:49.767343 1143678 cri.go:89] found id: ""
	I0603 13:51:49.767374 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.767382 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:49.767388 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:49.767450 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:49.822457 1143678 cri.go:89] found id: ""
	I0603 13:51:49.822491 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.822499 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:49.822505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:49.822561 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:49.867823 1143678 cri.go:89] found id: ""
	I0603 13:51:49.867855 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.867867 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:49.867875 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:49.867936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:49.906765 1143678 cri.go:89] found id: ""
	I0603 13:51:49.906797 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.906805 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:49.906816 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:49.906829 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:49.921731 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:49.921764 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:49.993832 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:49.993860 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:49.993878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:50.070080 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:50.070125 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:50.112323 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:50.112357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:48.317830 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.816577 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.817035 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.871664 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.871865 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:51.990258 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:54.489037 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.666289 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:52.680475 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:52.680550 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:52.722025 1143678 cri.go:89] found id: ""
	I0603 13:51:52.722063 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.722075 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:52.722083 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:52.722145 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:52.759709 1143678 cri.go:89] found id: ""
	I0603 13:51:52.759742 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.759754 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:52.759762 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:52.759838 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:52.797131 1143678 cri.go:89] found id: ""
	I0603 13:51:52.797162 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.797171 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:52.797176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:52.797231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:52.832921 1143678 cri.go:89] found id: ""
	I0603 13:51:52.832951 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.832959 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:52.832965 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:52.833024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:52.869361 1143678 cri.go:89] found id: ""
	I0603 13:51:52.869389 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.869399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:52.869422 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:52.869495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:52.905863 1143678 cri.go:89] found id: ""
	I0603 13:51:52.905897 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.905909 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:52.905917 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:52.905985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:52.940407 1143678 cri.go:89] found id: ""
	I0603 13:51:52.940438 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.940446 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:52.940452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:52.940517 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:52.982079 1143678 cri.go:89] found id: ""
	I0603 13:51:52.982115 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.982126 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:52.982138 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:52.982155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:53.066897 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:53.066942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:53.108016 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:53.108056 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:53.164105 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:53.164151 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:53.178708 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:53.178743 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:53.257441 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.758633 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:55.774241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:55.774329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:55.809373 1143678 cri.go:89] found id: ""
	I0603 13:51:55.809436 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.809450 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:55.809467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:55.809539 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:55.849741 1143678 cri.go:89] found id: ""
	I0603 13:51:55.849768 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.849776 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:55.849783 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:55.849834 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:55.893184 1143678 cri.go:89] found id: ""
	I0603 13:51:55.893216 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.893228 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:55.893238 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:55.893307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:55.931572 1143678 cri.go:89] found id: ""
	I0603 13:51:55.931618 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.931632 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:55.931642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:55.931713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:55.969490 1143678 cri.go:89] found id: ""
	I0603 13:51:55.969527 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.969538 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:55.969546 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:55.969614 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:56.009266 1143678 cri.go:89] found id: ""
	I0603 13:51:56.009301 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.009313 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:56.009321 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:56.009394 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:56.049471 1143678 cri.go:89] found id: ""
	I0603 13:51:56.049520 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.049540 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:56.049547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:56.049616 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:56.090176 1143678 cri.go:89] found id: ""
	I0603 13:51:56.090213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.090228 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:56.090241 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:56.090266 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:56.175692 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:56.175737 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:56.222642 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:56.222683 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:56.276258 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:56.276301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:56.291703 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:56.291739 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:56.364788 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.316604 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.816804 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:55.370917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.372903 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:59.870783 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:56.489636 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.990006 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.865558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:58.879983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:58.880074 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:58.917422 1143678 cri.go:89] found id: ""
	I0603 13:51:58.917461 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.917473 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:58.917480 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:58.917535 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:58.953900 1143678 cri.go:89] found id: ""
	I0603 13:51:58.953933 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.953943 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:58.953959 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:58.954030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:58.988677 1143678 cri.go:89] found id: ""
	I0603 13:51:58.988704 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.988713 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:58.988721 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:58.988783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:59.023436 1143678 cri.go:89] found id: ""
	I0603 13:51:59.023474 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.023486 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:59.023494 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:59.023570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:59.061357 1143678 cri.go:89] found id: ""
	I0603 13:51:59.061386 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.061394 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:59.061400 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:59.061487 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:59.102995 1143678 cri.go:89] found id: ""
	I0603 13:51:59.103025 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.103038 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:59.103047 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:59.103124 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:59.141443 1143678 cri.go:89] found id: ""
	I0603 13:51:59.141480 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.141492 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:59.141499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:59.141586 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:59.182909 1143678 cri.go:89] found id: ""
	I0603 13:51:59.182943 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.182953 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:59.182967 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:59.182984 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:59.259533 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:59.259580 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:59.308976 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:59.309016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.362092 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:59.362142 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:59.378836 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:59.378887 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:59.454524 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:01.954939 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:01.969968 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:01.970039 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:02.014226 1143678 cri.go:89] found id: ""
	I0603 13:52:02.014267 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.014280 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:02.014289 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:02.014361 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:02.051189 1143678 cri.go:89] found id: ""
	I0603 13:52:02.051244 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.051259 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:02.051268 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:02.051349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:02.093509 1143678 cri.go:89] found id: ""
	I0603 13:52:02.093548 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.093575 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:02.093586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:02.093718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:02.132069 1143678 cri.go:89] found id: ""
	I0603 13:52:02.132113 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.132129 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:02.132138 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:02.132299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:02.168043 1143678 cri.go:89] found id: ""
	I0603 13:52:02.168071 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.168079 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:02.168085 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:02.168138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:02.207029 1143678 cri.go:89] found id: ""
	I0603 13:52:02.207064 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.207074 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:02.207081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:02.207134 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:02.247669 1143678 cri.go:89] found id: ""
	I0603 13:52:02.247719 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.247728 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:02.247734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:02.247848 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:02.285780 1143678 cri.go:89] found id: ""
	I0603 13:52:02.285817 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.285829 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:02.285841 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:02.285863 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.817887 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.818381 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.871338 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:04.371052 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:00.990263 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.990651 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:05.490343 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.348775 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:02.349776 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:02.364654 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:02.364691 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:02.447948 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:02.447978 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:02.447992 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:02.534039 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:02.534100 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:05.080437 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:05.094169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:05.094245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:05.132312 1143678 cri.go:89] found id: ""
	I0603 13:52:05.132339 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.132346 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:05.132352 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:05.132423 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:05.168941 1143678 cri.go:89] found id: ""
	I0603 13:52:05.168979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.168990 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:05.168999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:05.169068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:05.207151 1143678 cri.go:89] found id: ""
	I0603 13:52:05.207188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.207196 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:05.207202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:05.207272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:05.258807 1143678 cri.go:89] found id: ""
	I0603 13:52:05.258839 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.258850 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:05.258859 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:05.259004 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:05.298250 1143678 cri.go:89] found id: ""
	I0603 13:52:05.298285 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.298297 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:05.298306 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:05.298381 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:05.340922 1143678 cri.go:89] found id: ""
	I0603 13:52:05.340951 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.340959 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:05.340966 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:05.341027 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:05.382680 1143678 cri.go:89] found id: ""
	I0603 13:52:05.382707 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.382715 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:05.382722 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:05.382777 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:05.426774 1143678 cri.go:89] found id: ""
	I0603 13:52:05.426801 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.426811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:05.426822 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:05.426836 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:05.483042 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:05.483091 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:05.499119 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:05.499159 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:05.580933 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:05.580962 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:05.580983 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:05.660395 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:05.660437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:03.818676 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.316881 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.371515 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.871174 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:07.490662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:09.992709 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.200887 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:08.215113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:08.215203 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:08.252367 1143678 cri.go:89] found id: ""
	I0603 13:52:08.252404 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.252417 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:08.252427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:08.252500 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:08.289249 1143678 cri.go:89] found id: ""
	I0603 13:52:08.289279 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.289290 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:08.289298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:08.289364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:08.331155 1143678 cri.go:89] found id: ""
	I0603 13:52:08.331181 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.331195 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:08.331201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:08.331258 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:08.371376 1143678 cri.go:89] found id: ""
	I0603 13:52:08.371400 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.371408 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:08.371415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:08.371477 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:08.408009 1143678 cri.go:89] found id: ""
	I0603 13:52:08.408045 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.408057 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:08.408065 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:08.408119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:08.446377 1143678 cri.go:89] found id: ""
	I0603 13:52:08.446413 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.446421 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:08.446429 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:08.446504 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:08.485429 1143678 cri.go:89] found id: ""
	I0603 13:52:08.485461 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.485471 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:08.485479 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:08.485546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:08.527319 1143678 cri.go:89] found id: ""
	I0603 13:52:08.527363 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.527375 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:08.527388 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:08.527414 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:08.602347 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:08.602371 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:08.602384 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:08.683855 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:08.683902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.724402 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:08.724443 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:08.781154 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:08.781202 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.297827 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:11.313927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:11.314006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:11.352622 1143678 cri.go:89] found id: ""
	I0603 13:52:11.352660 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.352671 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:11.352678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:11.352755 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:11.395301 1143678 cri.go:89] found id: ""
	I0603 13:52:11.395338 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.395351 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:11.395360 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:11.395442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:11.431104 1143678 cri.go:89] found id: ""
	I0603 13:52:11.431143 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.431155 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:11.431170 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:11.431234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:11.470177 1143678 cri.go:89] found id: ""
	I0603 13:52:11.470212 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.470223 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:11.470241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:11.470309 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:11.508741 1143678 cri.go:89] found id: ""
	I0603 13:52:11.508779 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.508803 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:11.508810 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:11.508906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:11.544970 1143678 cri.go:89] found id: ""
	I0603 13:52:11.545002 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.545012 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:11.545022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:11.545093 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:11.583606 1143678 cri.go:89] found id: ""
	I0603 13:52:11.583636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.583653 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:11.583666 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:11.583739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:11.624770 1143678 cri.go:89] found id: ""
	I0603 13:52:11.624806 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.624815 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:11.624824 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:11.624841 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:11.680251 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:11.680298 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.695656 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:11.695695 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:11.770414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:11.770478 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:11.770497 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:11.850812 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:11.850871 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.318447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:10.817734 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:11.372533 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:13.871822 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:12.490666 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.988752 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.398649 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:14.411591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:14.411689 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:14.447126 1143678 cri.go:89] found id: ""
	I0603 13:52:14.447158 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.447170 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:14.447178 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:14.447245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:14.486681 1143678 cri.go:89] found id: ""
	I0603 13:52:14.486716 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.486728 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:14.486735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:14.486799 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:14.521297 1143678 cri.go:89] found id: ""
	I0603 13:52:14.521326 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.521337 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:14.521343 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:14.521443 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:14.565086 1143678 cri.go:89] found id: ""
	I0603 13:52:14.565121 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.565130 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:14.565136 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:14.565196 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:14.601947 1143678 cri.go:89] found id: ""
	I0603 13:52:14.601975 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.601984 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:14.601990 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:14.602044 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:14.638332 1143678 cri.go:89] found id: ""
	I0603 13:52:14.638359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.638366 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:14.638374 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:14.638435 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:14.675254 1143678 cri.go:89] found id: ""
	I0603 13:52:14.675284 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.675293 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:14.675299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:14.675354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:14.712601 1143678 cri.go:89] found id: ""
	I0603 13:52:14.712631 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.712639 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:14.712649 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:14.712663 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:14.787026 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:14.787068 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:14.836534 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:14.836564 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:14.889682 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:14.889729 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:14.905230 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:14.905264 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:14.979090 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
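	
	Note: every "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach the apiserver at localhost:8443 ("connection refused"), which is consistent with the empty kube-apiserver probes above. The sketch below is an equivalent stand-alone reachability check; the /healthz path and the insecure TLS settings are assumptions for illustration only, not what the test harness runs.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// checkAPIServer hits the apiserver's /healthz endpoint; a "connection refused"
	// error here matches the failure mode in the log (no apiserver container running).
	func checkAPIServer(addr string) error {
		client := &http.Client{
			Timeout: 3 * time.Second,
			// The apiserver serves TLS signed by a cluster-internal CA; verification
			// is skipped here only because this is a local liveness probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://" + addr + "/healthz")
		if err != nil {
			return err // e.g. "connect: connection refused"
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /healthz:", resp.Status)
		return nil
	}
	
	func main() {
		if err := checkAPIServer("localhost:8443"); err != nil {
			fmt.Println("apiserver unreachable:", err)
		}
	}
	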
	I0603 13:52:13.317070 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.317490 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.816412 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.871901 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.370626 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:16.989195 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.990108 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.479590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:17.495088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:17.495250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:17.530832 1143678 cri.go:89] found id: ""
	I0603 13:52:17.530871 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.530883 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:17.530891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:17.530966 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:17.567183 1143678 cri.go:89] found id: ""
	I0603 13:52:17.567213 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.567224 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:17.567232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:17.567305 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:17.602424 1143678 cri.go:89] found id: ""
	I0603 13:52:17.602458 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.602469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:17.602493 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:17.602570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:17.641148 1143678 cri.go:89] found id: ""
	I0603 13:52:17.641184 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.641197 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:17.641205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:17.641273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:17.679004 1143678 cri.go:89] found id: ""
	I0603 13:52:17.679031 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.679039 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:17.679045 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:17.679102 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:17.717667 1143678 cri.go:89] found id: ""
	I0603 13:52:17.717698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.717707 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:17.717715 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:17.717786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:17.760262 1143678 cri.go:89] found id: ""
	I0603 13:52:17.760300 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.760323 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:17.760331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:17.760416 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:17.796910 1143678 cri.go:89] found id: ""
	I0603 13:52:17.796943 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.796960 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:17.796976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:17.796990 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:17.811733 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:17.811768 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:17.891891 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:17.891920 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:17.891939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:17.969495 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:17.969535 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:18.032622 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:18.032654 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.586079 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:20.599118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:20.599202 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:20.633732 1143678 cri.go:89] found id: ""
	I0603 13:52:20.633770 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.633780 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:20.633787 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:20.633841 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:20.668126 1143678 cri.go:89] found id: ""
	I0603 13:52:20.668155 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.668163 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:20.668169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:20.668231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:20.704144 1143678 cri.go:89] found id: ""
	I0603 13:52:20.704177 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.704187 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:20.704194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:20.704251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:20.745562 1143678 cri.go:89] found id: ""
	I0603 13:52:20.745594 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.745602 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:20.745608 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:20.745663 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:20.788998 1143678 cri.go:89] found id: ""
	I0603 13:52:20.789041 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.789053 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:20.789075 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:20.789152 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:20.832466 1143678 cri.go:89] found id: ""
	I0603 13:52:20.832495 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.832503 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:20.832510 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:20.832575 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:20.875212 1143678 cri.go:89] found id: ""
	I0603 13:52:20.875248 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.875258 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:20.875267 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:20.875336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:20.912957 1143678 cri.go:89] found id: ""
	I0603 13:52:20.912989 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.912999 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:20.913011 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:20.913030 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.963655 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:20.963700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:20.978619 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:20.978658 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:21.057136 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:21.057163 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:21.057185 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:21.136368 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:21.136415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:19.817227 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.817625 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:20.871465 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.370757 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.488564 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.991662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.676222 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:23.691111 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:23.691213 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:23.733282 1143678 cri.go:89] found id: ""
	I0603 13:52:23.733319 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.733332 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:23.733341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:23.733438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:23.780841 1143678 cri.go:89] found id: ""
	I0603 13:52:23.780873 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.780882 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:23.780894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:23.780947 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:23.820521 1143678 cri.go:89] found id: ""
	I0603 13:52:23.820553 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.820565 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:23.820573 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:23.820636 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:23.857684 1143678 cri.go:89] found id: ""
	I0603 13:52:23.857728 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.857739 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:23.857747 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:23.857818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:23.896800 1143678 cri.go:89] found id: ""
	I0603 13:52:23.896829 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.896842 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:23.896850 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:23.896914 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:23.935511 1143678 cri.go:89] found id: ""
	I0603 13:52:23.935538 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.935547 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:23.935554 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:23.935608 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:23.973858 1143678 cri.go:89] found id: ""
	I0603 13:52:23.973885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.973895 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:23.973901 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:23.973961 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:24.012491 1143678 cri.go:89] found id: ""
	I0603 13:52:24.012521 1143678 logs.go:276] 0 containers: []
	W0603 13:52:24.012532 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:24.012545 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:24.012569 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.064274 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:24.064319 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:24.079382 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:24.079420 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:24.153708 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:24.153733 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:24.153749 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:24.233104 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:24.233148 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:26.774771 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:26.789853 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:26.789924 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:26.830089 1143678 cri.go:89] found id: ""
	I0603 13:52:26.830129 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.830167 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:26.830176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:26.830251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:26.866907 1143678 cri.go:89] found id: ""
	I0603 13:52:26.866941 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.866952 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:26.866960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:26.867031 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:26.915028 1143678 cri.go:89] found id: ""
	I0603 13:52:26.915061 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.915070 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:26.915079 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:26.915151 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:26.962044 1143678 cri.go:89] found id: ""
	I0603 13:52:26.962075 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.962083 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:26.962088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:26.962154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:26.996156 1143678 cri.go:89] found id: ""
	I0603 13:52:26.996188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.996196 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:26.996202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:26.996265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:27.038593 1143678 cri.go:89] found id: ""
	I0603 13:52:27.038627 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.038636 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:27.038642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:27.038708 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:27.076116 1143678 cri.go:89] found id: ""
	I0603 13:52:27.076144 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.076153 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:27.076159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:27.076228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:27.110653 1143678 cri.go:89] found id: ""
	I0603 13:52:27.110688 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.110700 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:27.110714 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:27.110733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:27.193718 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:27.193743 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:27.193756 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:27.269423 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:27.269483 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:27.307899 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:27.307939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.317663 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.817148 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:25.371861 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.870070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:29.870299 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.488753 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:28.489065 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:30.489568 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
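	
	Note: the interleaved pod_ready.go lines come from the other test processes (1143252, 1143450, 1142862), each polling its metrics-server pod, which keeps reporting Ready=False at roughly 2.5-second intervals. The client-go sketch below shows that kind of readiness poll; the kubeconfig path, namespace, pod name, and retry budget are taken from the log or assumed for illustration, and this is not the actual pod_ready.go implementation.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's Ready condition is True, i.e. the
	// check the log keeps printing as has status "Ready":"False".
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll roughly every 2.5s for a bounded number of attempts, matching the
		// cadence visible in the log timestamps.
		for i := 0; i < 240; i++ {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-569cc877fc-v7d9t", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println("pod not Ready yet")
			time.Sleep(2500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
	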
	I0603 13:52:27.363830 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:27.363878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:29.879016 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:29.893482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:29.893553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:29.932146 1143678 cri.go:89] found id: ""
	I0603 13:52:29.932190 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.932199 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:29.932205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:29.932259 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:29.968986 1143678 cri.go:89] found id: ""
	I0603 13:52:29.969020 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.969032 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:29.969040 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:29.969097 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:30.007190 1143678 cri.go:89] found id: ""
	I0603 13:52:30.007228 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.007238 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:30.007244 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:30.007303 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:30.044607 1143678 cri.go:89] found id: ""
	I0603 13:52:30.044638 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.044646 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:30.044652 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:30.044706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:30.083103 1143678 cri.go:89] found id: ""
	I0603 13:52:30.083179 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.083193 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:30.083204 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:30.083280 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:30.124125 1143678 cri.go:89] found id: ""
	I0603 13:52:30.124152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.124160 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:30.124167 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:30.124234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:30.164293 1143678 cri.go:89] found id: ""
	I0603 13:52:30.164329 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.164345 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:30.164353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:30.164467 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:30.219980 1143678 cri.go:89] found id: ""
	I0603 13:52:30.220015 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.220028 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:30.220042 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:30.220063 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:30.313282 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:30.313305 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:30.313323 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:30.393759 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:30.393801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:30.441384 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:30.441434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:30.493523 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:30.493558 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:28.817554 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.317629 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.870659 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.870954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:32.990340 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.495665 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.009114 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:33.023177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:33.023278 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:33.065346 1143678 cri.go:89] found id: ""
	I0603 13:52:33.065388 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.065400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:33.065424 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:33.065506 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:33.108513 1143678 cri.go:89] found id: ""
	I0603 13:52:33.108549 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.108561 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:33.108569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:33.108640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:33.146053 1143678 cri.go:89] found id: ""
	I0603 13:52:33.146082 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.146089 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:33.146107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:33.146165 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:33.187152 1143678 cri.go:89] found id: ""
	I0603 13:52:33.187195 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.187206 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:33.187216 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:33.187302 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:33.223887 1143678 cri.go:89] found id: ""
	I0603 13:52:33.223920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.223932 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:33.223941 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:33.224010 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:33.263902 1143678 cri.go:89] found id: ""
	I0603 13:52:33.263958 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.263971 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:33.263980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:33.264048 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:33.302753 1143678 cri.go:89] found id: ""
	I0603 13:52:33.302785 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.302796 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:33.302805 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:33.302859 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:33.340711 1143678 cri.go:89] found id: ""
	I0603 13:52:33.340745 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.340754 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:33.340763 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:33.340780 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:33.400226 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:33.400271 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:33.414891 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:33.414923 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:33.498121 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:33.498156 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:33.498172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.575682 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:33.575731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.116930 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:36.133001 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:36.133070 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:36.182727 1143678 cri.go:89] found id: ""
	I0603 13:52:36.182763 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.182774 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:36.182782 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:36.182851 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:36.228804 1143678 cri.go:89] found id: ""
	I0603 13:52:36.228841 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.228854 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:36.228862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:36.228929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:36.279320 1143678 cri.go:89] found id: ""
	I0603 13:52:36.279359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.279370 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:36.279378 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:36.279461 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:36.319725 1143678 cri.go:89] found id: ""
	I0603 13:52:36.319751 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.319759 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:36.319765 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:36.319819 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:36.356657 1143678 cri.go:89] found id: ""
	I0603 13:52:36.356685 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.356693 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:36.356703 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:36.356760 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:36.393397 1143678 cri.go:89] found id: ""
	I0603 13:52:36.393448 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.393459 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:36.393467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:36.393545 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:36.429211 1143678 cri.go:89] found id: ""
	I0603 13:52:36.429246 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.429254 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:36.429260 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:36.429324 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:36.466796 1143678 cri.go:89] found id: ""
	I0603 13:52:36.466831 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.466839 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:36.466849 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:36.466862 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.509871 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:36.509900 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:36.562167 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:36.562206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:36.577014 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:36.577047 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:36.657581 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:36.657604 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:36.657625 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.817495 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.820854 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:36.371645 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:38.871484 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:37.989038 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.989986 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.242339 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:39.257985 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:39.258072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:39.300153 1143678 cri.go:89] found id: ""
	I0603 13:52:39.300185 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.300197 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:39.300205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:39.300304 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:39.336117 1143678 cri.go:89] found id: ""
	I0603 13:52:39.336152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.336162 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:39.336175 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:39.336307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:39.375945 1143678 cri.go:89] found id: ""
	I0603 13:52:39.375979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.375990 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:39.375998 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:39.376066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:39.417207 1143678 cri.go:89] found id: ""
	I0603 13:52:39.417242 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.417253 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:39.417261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:39.417340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:39.456259 1143678 cri.go:89] found id: ""
	I0603 13:52:39.456295 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.456307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:39.456315 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:39.456377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:39.494879 1143678 cri.go:89] found id: ""
	I0603 13:52:39.494904 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.494913 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:39.494919 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:39.494979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:39.532129 1143678 cri.go:89] found id: ""
	I0603 13:52:39.532157 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.532168 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:39.532177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:39.532267 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:39.570662 1143678 cri.go:89] found id: ""
	I0603 13:52:39.570693 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.570703 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:39.570717 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:39.570734 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:39.622008 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:39.622057 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:39.636849 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:39.636884 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:39.719914 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:39.719948 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:39.719967 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:39.801723 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:39.801769 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:38.317321 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:40.817649 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.819652 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:41.370965 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:43.371900 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.490311 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:44.988731 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.348936 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:42.363663 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:42.363735 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:42.400584 1143678 cri.go:89] found id: ""
	I0603 13:52:42.400616 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.400625 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:42.400631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:42.400685 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:42.438853 1143678 cri.go:89] found id: ""
	I0603 13:52:42.438885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.438893 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:42.438899 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:42.438954 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:42.474980 1143678 cri.go:89] found id: ""
	I0603 13:52:42.475013 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.475025 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:42.475032 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:42.475086 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:42.511027 1143678 cri.go:89] found id: ""
	I0603 13:52:42.511056 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.511068 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:42.511077 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:42.511237 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:42.545333 1143678 cri.go:89] found id: ""
	I0603 13:52:42.545367 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.545378 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:42.545386 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:42.545468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:42.583392 1143678 cri.go:89] found id: ""
	I0603 13:52:42.583438 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.583556 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:42.583591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:42.583656 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:42.620886 1143678 cri.go:89] found id: ""
	I0603 13:52:42.620916 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.620924 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:42.620930 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:42.620985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:42.656265 1143678 cri.go:89] found id: ""
	I0603 13:52:42.656301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.656313 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:42.656327 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:42.656344 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:42.711078 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:42.711124 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:42.727751 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:42.727788 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:42.802330 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:42.802356 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:42.802370 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:42.883700 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:42.883742 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.424591 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:45.440797 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:45.440883 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:45.483664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.483698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.483709 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:45.483717 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:45.483789 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:45.523147 1143678 cri.go:89] found id: ""
	I0603 13:52:45.523182 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.523193 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:45.523201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:45.523273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:45.563483 1143678 cri.go:89] found id: ""
	I0603 13:52:45.563516 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.563527 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:45.563536 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:45.563598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:45.603574 1143678 cri.go:89] found id: ""
	I0603 13:52:45.603603 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.603618 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:45.603625 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:45.603680 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:45.642664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.642694 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.642705 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:45.642714 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:45.642793 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:45.679961 1143678 cri.go:89] found id: ""
	I0603 13:52:45.679998 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.680011 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:45.680026 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:45.680100 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:45.716218 1143678 cri.go:89] found id: ""
	I0603 13:52:45.716255 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.716263 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:45.716270 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:45.716364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:45.752346 1143678 cri.go:89] found id: ""
	I0603 13:52:45.752374 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.752382 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:45.752391 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:45.752405 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.793992 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:45.794029 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:45.844930 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:45.844973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:45.859594 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:45.859633 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:45.936469 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:45.936498 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:45.936515 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:45.317705 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.818994 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:45.870780 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.871003 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.871625 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:46.990866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.488680 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:48.514959 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:48.528331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:48.528401 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:48.565671 1143678 cri.go:89] found id: ""
	I0603 13:52:48.565703 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.565715 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:48.565724 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:48.565786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:48.603938 1143678 cri.go:89] found id: ""
	I0603 13:52:48.603973 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.603991 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:48.604000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:48.604068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:48.643521 1143678 cri.go:89] found id: ""
	I0603 13:52:48.643550 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.643562 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:48.643571 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:48.643627 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:48.678264 1143678 cri.go:89] found id: ""
	I0603 13:52:48.678301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.678312 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:48.678320 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:48.678407 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:48.714974 1143678 cri.go:89] found id: ""
	I0603 13:52:48.715014 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.715026 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:48.715034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:48.715138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:48.750364 1143678 cri.go:89] found id: ""
	I0603 13:52:48.750396 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.750408 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:48.750416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:48.750482 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:48.788203 1143678 cri.go:89] found id: ""
	I0603 13:52:48.788238 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.788249 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:48.788258 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:48.788345 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:48.826891 1143678 cri.go:89] found id: ""
	I0603 13:52:48.826920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.826928 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:48.826938 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:48.826951 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:48.877271 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:48.877315 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:48.892155 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:48.892187 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:48.973433 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:48.973459 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:48.973473 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:49.062819 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:49.062888 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:51.614261 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:51.628056 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:51.628142 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:51.662894 1143678 cri.go:89] found id: ""
	I0603 13:52:51.662924 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.662935 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:51.662942 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:51.663009 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:51.701847 1143678 cri.go:89] found id: ""
	I0603 13:52:51.701878 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.701889 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:51.701896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:51.701963 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:51.737702 1143678 cri.go:89] found id: ""
	I0603 13:52:51.737741 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.737752 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:51.737760 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:51.737833 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:51.772913 1143678 cri.go:89] found id: ""
	I0603 13:52:51.772944 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.772956 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:51.772964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:51.773034 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:51.810268 1143678 cri.go:89] found id: ""
	I0603 13:52:51.810298 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.810307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:51.810312 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:51.810377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:51.848575 1143678 cri.go:89] found id: ""
	I0603 13:52:51.848612 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.848624 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:51.848633 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:51.848696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:51.886500 1143678 cri.go:89] found id: ""
	I0603 13:52:51.886536 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.886549 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:51.886560 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:51.886617 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:51.924070 1143678 cri.go:89] found id: ""
	I0603 13:52:51.924104 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.924115 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:51.924128 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:51.924146 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:51.940324 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:51.940355 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:52.019958 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:52.019997 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:52.020015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:52.095953 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:52.095999 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:52.141070 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:52.141102 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:50.317008 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:52.317142 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.872275 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.376761 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.490098 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:53.491292 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.694651 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:54.708508 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:54.708597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:54.745708 1143678 cri.go:89] found id: ""
	I0603 13:52:54.745748 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.745762 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:54.745770 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:54.745842 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:54.783335 1143678 cri.go:89] found id: ""
	I0603 13:52:54.783369 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.783381 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:54.783389 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:54.783465 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:54.824111 1143678 cri.go:89] found id: ""
	I0603 13:52:54.824140 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.824151 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:54.824159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:54.824230 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:54.868676 1143678 cri.go:89] found id: ""
	I0603 13:52:54.868710 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.868721 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:54.868730 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:54.868801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:54.906180 1143678 cri.go:89] found id: ""
	I0603 13:52:54.906216 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.906227 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:54.906235 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:54.906310 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:54.945499 1143678 cri.go:89] found id: ""
	I0603 13:52:54.945532 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.945544 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:54.945552 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:54.945619 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:54.986785 1143678 cri.go:89] found id: ""
	I0603 13:52:54.986812 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.986820 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:54.986826 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:54.986888 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:55.035290 1143678 cri.go:89] found id: ""
	I0603 13:52:55.035320 1143678 logs.go:276] 0 containers: []
	W0603 13:52:55.035329 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:55.035338 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:55.035352 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:55.085384 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:55.085451 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:55.100699 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:55.100733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:55.171587 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:55.171614 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:55.171638 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:55.249078 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:55.249123 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:54.317435 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.318657 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.869954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.872728 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:55.990512 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.489578 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:00.490668 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:57.791538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:57.804373 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:57.804437 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:57.843969 1143678 cri.go:89] found id: ""
	I0603 13:52:57.844007 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.844016 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:57.844022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:57.844077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:57.881201 1143678 cri.go:89] found id: ""
	I0603 13:52:57.881239 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.881252 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:57.881261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:57.881336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:57.917572 1143678 cri.go:89] found id: ""
	I0603 13:52:57.917601 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.917610 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:57.917617 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:57.917671 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:57.951603 1143678 cri.go:89] found id: ""
	I0603 13:52:57.951642 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.951654 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:57.951661 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:57.951716 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:57.992833 1143678 cri.go:89] found id: ""
	I0603 13:52:57.992863 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.992874 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:57.992881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:57.992945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:58.031595 1143678 cri.go:89] found id: ""
	I0603 13:52:58.031636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.031648 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:58.031657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:58.031723 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:58.068947 1143678 cri.go:89] found id: ""
	I0603 13:52:58.068985 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.068996 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:58.069005 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:58.069077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:58.106559 1143678 cri.go:89] found id: ""
	I0603 13:52:58.106587 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.106598 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:58.106623 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:58.106640 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:58.162576 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:58.162623 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:58.177104 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:58.177155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:58.250279 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:58.250312 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:58.250329 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.330876 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:58.330920 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:00.871443 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:00.885505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:00.885589 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:00.923878 1143678 cri.go:89] found id: ""
	I0603 13:53:00.923910 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.923920 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:00.923928 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:00.923995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:00.960319 1143678 cri.go:89] found id: ""
	I0603 13:53:00.960362 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.960375 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:00.960384 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:00.960449 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:00.998806 1143678 cri.go:89] found id: ""
	I0603 13:53:00.998845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.998857 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:00.998866 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:00.998929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:01.033211 1143678 cri.go:89] found id: ""
	I0603 13:53:01.033245 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.033256 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:01.033265 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:01.033341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:01.072852 1143678 cri.go:89] found id: ""
	I0603 13:53:01.072883 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.072891 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:01.072898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:01.072950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:01.115667 1143678 cri.go:89] found id: ""
	I0603 13:53:01.115699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.115711 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:01.115719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:01.115824 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:01.153676 1143678 cri.go:89] found id: ""
	I0603 13:53:01.153717 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.153733 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:01.153741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:01.153815 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:01.188970 1143678 cri.go:89] found id: ""
	I0603 13:53:01.189003 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.189017 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:01.189031 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:01.189049 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:01.233151 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:01.233214 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:01.287218 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:01.287269 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:01.302370 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:01.302408 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:01.378414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:01.378444 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:01.378463 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.817003 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.317698 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.371257 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.872917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:02.989133 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:04.990930 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.957327 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:03.971246 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:03.971340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:04.007299 1143678 cri.go:89] found id: ""
	I0603 13:53:04.007335 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.007347 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:04.007356 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:04.007427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:04.046364 1143678 cri.go:89] found id: ""
	I0603 13:53:04.046396 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.046405 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:04.046411 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:04.046469 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:04.082094 1143678 cri.go:89] found id: ""
	I0603 13:53:04.082127 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.082139 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:04.082148 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:04.082209 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:04.117389 1143678 cri.go:89] found id: ""
	I0603 13:53:04.117434 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.117446 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:04.117454 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:04.117530 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:04.150560 1143678 cri.go:89] found id: ""
	I0603 13:53:04.150596 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.150606 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:04.150614 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:04.150678 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:04.184808 1143678 cri.go:89] found id: ""
	I0603 13:53:04.184845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.184857 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:04.184865 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:04.184935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:04.220286 1143678 cri.go:89] found id: ""
	I0603 13:53:04.220317 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.220326 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:04.220332 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:04.220385 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:04.258898 1143678 cri.go:89] found id: ""
	I0603 13:53:04.258929 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.258941 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:04.258955 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:04.258972 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:04.312151 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:04.312198 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:04.329908 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:04.329943 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:04.402075 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:04.402106 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:04.402138 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:04.482873 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:04.482936 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:07.049978 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:07.063072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:07.063140 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:07.097703 1143678 cri.go:89] found id: ""
	I0603 13:53:07.097737 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.097748 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:07.097755 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:07.097811 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:07.134826 1143678 cri.go:89] found id: ""
	I0603 13:53:07.134865 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.134878 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:07.134886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:07.134955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:07.178015 1143678 cri.go:89] found id: ""
	I0603 13:53:07.178050 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.178061 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:07.178068 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:07.178138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:07.215713 1143678 cri.go:89] found id: ""
	I0603 13:53:07.215753 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.215764 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:07.215777 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:07.215840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:07.251787 1143678 cri.go:89] found id: ""
	I0603 13:53:07.251815 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.251824 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:07.251830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:07.251897 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:07.293357 1143678 cri.go:89] found id: ""
	I0603 13:53:07.293387 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.293398 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:07.293427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:07.293496 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:07.329518 1143678 cri.go:89] found id: ""
	I0603 13:53:07.329551 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.329561 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:07.329569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:07.329650 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:03.819203 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.317653 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.370539 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:08.370701 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.490706 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:09.990002 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.369534 1143678 cri.go:89] found id: ""
	I0603 13:53:07.369576 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.369587 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:07.369601 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:07.369617 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:07.424211 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:07.424260 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:07.439135 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:07.439172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:07.511325 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:07.511360 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:07.511378 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:07.588348 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:07.588393 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:10.129812 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:10.143977 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:10.144057 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:10.181873 1143678 cri.go:89] found id: ""
	I0603 13:53:10.181906 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.181918 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:10.181926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:10.181981 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:10.218416 1143678 cri.go:89] found id: ""
	I0603 13:53:10.218460 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.218473 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:10.218482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:10.218562 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:10.253580 1143678 cri.go:89] found id: ""
	I0603 13:53:10.253618 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.253630 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:10.253646 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:10.253717 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:10.302919 1143678 cri.go:89] found id: ""
	I0603 13:53:10.302949 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.302957 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:10.302964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:10.303024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:10.343680 1143678 cri.go:89] found id: ""
	I0603 13:53:10.343709 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.343721 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:10.343729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:10.343798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:10.379281 1143678 cri.go:89] found id: ""
	I0603 13:53:10.379307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.379315 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:10.379322 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:10.379374 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:10.420197 1143678 cri.go:89] found id: ""
	I0603 13:53:10.420225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.420233 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:10.420239 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:10.420322 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:10.458578 1143678 cri.go:89] found id: ""
	I0603 13:53:10.458609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.458618 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:10.458629 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:10.458642 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:10.511785 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:10.511828 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:10.526040 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:10.526081 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:10.603721 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:10.603749 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:10.603766 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:10.684153 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:10.684204 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:08.816447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.318264 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:10.374788 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:12.871019 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.871064 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.992127 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.488866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:13.227605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:13.241131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:13.241228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:13.284636 1143678 cri.go:89] found id: ""
	I0603 13:53:13.284667 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.284675 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:13.284681 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:13.284737 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:13.322828 1143678 cri.go:89] found id: ""
	I0603 13:53:13.322861 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.322873 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:13.322881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:13.322945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:13.360061 1143678 cri.go:89] found id: ""
	I0603 13:53:13.360089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.360097 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:13.360103 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:13.360176 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:13.397115 1143678 cri.go:89] found id: ""
	I0603 13:53:13.397149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.397158 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:13.397164 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:13.397234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:13.434086 1143678 cri.go:89] found id: ""
	I0603 13:53:13.434118 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.434127 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:13.434135 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:13.434194 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:13.470060 1143678 cri.go:89] found id: ""
	I0603 13:53:13.470089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.470101 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:13.470113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:13.470189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:13.508423 1143678 cri.go:89] found id: ""
	I0603 13:53:13.508464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.508480 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:13.508487 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:13.508552 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:13.546713 1143678 cri.go:89] found id: ""
	I0603 13:53:13.546752 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.546765 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:13.546778 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:13.546796 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:13.632984 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:13.633027 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:13.679169 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:13.679216 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:13.735765 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:13.735812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.750175 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:13.750210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:13.826571 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.327185 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:16.340163 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:16.340253 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:16.380260 1143678 cri.go:89] found id: ""
	I0603 13:53:16.380292 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.380300 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:16.380307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:16.380373 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:16.420408 1143678 cri.go:89] found id: ""
	I0603 13:53:16.420438 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.420449 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:16.420457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:16.420534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:16.459250 1143678 cri.go:89] found id: ""
	I0603 13:53:16.459285 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.459297 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:16.459307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:16.459377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:16.496395 1143678 cri.go:89] found id: ""
	I0603 13:53:16.496427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.496436 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:16.496444 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:16.496516 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:16.534402 1143678 cri.go:89] found id: ""
	I0603 13:53:16.534433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.534442 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:16.534449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:16.534514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:16.571550 1143678 cri.go:89] found id: ""
	I0603 13:53:16.571577 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.571584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:16.571591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:16.571659 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:16.608425 1143678 cri.go:89] found id: ""
	I0603 13:53:16.608457 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.608468 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:16.608482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:16.608549 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:16.647282 1143678 cri.go:89] found id: ""
	I0603 13:53:16.647315 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.647324 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:16.647334 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:16.647351 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:16.728778 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.728814 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:16.728831 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:16.822702 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:16.822747 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:16.868816 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:16.868845 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:16.922262 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:16.922301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.818935 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.316865 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:17.370681 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.371232 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.489494 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:18.490176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:20.491433 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.438231 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:19.452520 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:19.452603 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:19.488089 1143678 cri.go:89] found id: ""
	I0603 13:53:19.488121 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.488133 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:19.488141 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:19.488216 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:19.524494 1143678 cri.go:89] found id: ""
	I0603 13:53:19.524527 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.524537 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:19.524543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:19.524595 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:19.561288 1143678 cri.go:89] found id: ""
	I0603 13:53:19.561323 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.561333 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:19.561341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:19.561420 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:19.597919 1143678 cri.go:89] found id: ""
	I0603 13:53:19.597965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.597976 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:19.597984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:19.598056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:19.634544 1143678 cri.go:89] found id: ""
	I0603 13:53:19.634579 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.634591 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:19.634599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:19.634668 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:19.671473 1143678 cri.go:89] found id: ""
	I0603 13:53:19.671506 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.671518 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:19.671527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:19.671598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:19.707968 1143678 cri.go:89] found id: ""
	I0603 13:53:19.708000 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.708011 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:19.708019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:19.708119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:19.745555 1143678 cri.go:89] found id: ""
	I0603 13:53:19.745593 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.745604 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:19.745617 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:19.745631 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:19.830765 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:19.830812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:19.875160 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:19.875197 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:19.927582 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:19.927627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:19.942258 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:19.942289 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:20.016081 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:18.820067 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.319103 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.871214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.371680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.990210 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.990605 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.516859 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:22.534973 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:22.535040 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:22.593003 1143678 cri.go:89] found id: ""
	I0603 13:53:22.593043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.593051 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:22.593058 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:22.593121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:22.649916 1143678 cri.go:89] found id: ""
	I0603 13:53:22.649951 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.649963 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:22.649971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:22.650030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:22.689397 1143678 cri.go:89] found id: ""
	I0603 13:53:22.689449 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.689459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:22.689465 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:22.689521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:22.725109 1143678 cri.go:89] found id: ""
	I0603 13:53:22.725149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.725161 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:22.725169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:22.725250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:22.761196 1143678 cri.go:89] found id: ""
	I0603 13:53:22.761225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.761237 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:22.761245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:22.761311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:22.804065 1143678 cri.go:89] found id: ""
	I0603 13:53:22.804103 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.804112 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:22.804119 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:22.804189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:22.840456 1143678 cri.go:89] found id: ""
	I0603 13:53:22.840485 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.840493 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:22.840499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:22.840553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:22.876796 1143678 cri.go:89] found id: ""
	I0603 13:53:22.876831 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.876842 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:22.876854 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:22.876869 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:22.957274 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:22.957317 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:22.998360 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:22.998394 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.054895 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:23.054942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:23.070107 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:23.070141 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:23.147460 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:25.647727 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:25.663603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:25.663691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:25.698102 1143678 cri.go:89] found id: ""
	I0603 13:53:25.698139 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.698150 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:25.698159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:25.698227 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:25.738601 1143678 cri.go:89] found id: ""
	I0603 13:53:25.738641 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.738648 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:25.738655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:25.738718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:25.780622 1143678 cri.go:89] found id: ""
	I0603 13:53:25.780657 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.780670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:25.780678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:25.780751 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:25.816950 1143678 cri.go:89] found id: ""
	I0603 13:53:25.816978 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.816989 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:25.816997 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:25.817060 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:25.860011 1143678 cri.go:89] found id: ""
	I0603 13:53:25.860051 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.860063 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:25.860072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:25.860138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:25.898832 1143678 cri.go:89] found id: ""
	I0603 13:53:25.898866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.898878 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:25.898886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:25.898959 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:25.937483 1143678 cri.go:89] found id: ""
	I0603 13:53:25.937518 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.937533 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:25.937541 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:25.937607 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:25.973972 1143678 cri.go:89] found id: ""
	I0603 13:53:25.974008 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.974021 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:25.974034 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:25.974065 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:25.989188 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:25.989227 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:26.065521 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:26.065546 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:26.065560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:26.147852 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:26.147899 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:26.191395 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:26.191431 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.816928 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:25.818534 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:26.872084 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.872558 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:27.489951 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:29.989352 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.751041 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:28.764764 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:28.764826 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:28.808232 1143678 cri.go:89] found id: ""
	I0603 13:53:28.808271 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.808285 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:28.808293 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:28.808369 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:28.849058 1143678 cri.go:89] found id: ""
	I0603 13:53:28.849094 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.849107 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:28.849114 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:28.849187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:28.892397 1143678 cri.go:89] found id: ""
	I0603 13:53:28.892427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.892441 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:28.892447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:28.892515 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:28.932675 1143678 cri.go:89] found id: ""
	I0603 13:53:28.932715 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.932727 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:28.932735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:28.932840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:28.969732 1143678 cri.go:89] found id: ""
	I0603 13:53:28.969769 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.969781 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:28.969789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:28.969857 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:29.007765 1143678 cri.go:89] found id: ""
	I0603 13:53:29.007791 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.007798 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:29.007804 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:29.007865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:29.044616 1143678 cri.go:89] found id: ""
	I0603 13:53:29.044652 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.044664 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:29.044675 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:29.044734 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:29.081133 1143678 cri.go:89] found id: ""
	I0603 13:53:29.081166 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.081187 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:29.081198 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:29.081213 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:29.095753 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:29.095783 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:29.174472 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:29.174496 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:29.174516 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:29.251216 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:29.251262 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:29.289127 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:29.289168 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:31.845335 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:31.860631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:31.860720 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:31.904507 1143678 cri.go:89] found id: ""
	I0603 13:53:31.904544 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.904556 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:31.904564 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:31.904633 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:31.940795 1143678 cri.go:89] found id: ""
	I0603 13:53:31.940832 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.940845 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:31.940852 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:31.940921 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:31.978447 1143678 cri.go:89] found id: ""
	I0603 13:53:31.978481 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.978499 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:31.978507 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:31.978569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:32.017975 1143678 cri.go:89] found id: ""
	I0603 13:53:32.018009 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.018018 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:32.018025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:32.018089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:32.053062 1143678 cri.go:89] found id: ""
	I0603 13:53:32.053091 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.053099 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:32.053106 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:32.053181 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:32.089822 1143678 cri.go:89] found id: ""
	I0603 13:53:32.089856 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.089868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:32.089877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:32.089944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:32.126243 1143678 cri.go:89] found id: ""
	I0603 13:53:32.126280 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.126291 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:32.126299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:32.126358 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:32.163297 1143678 cri.go:89] found id: ""
	I0603 13:53:32.163346 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.163357 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:32.163370 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:32.163386 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:32.218452 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:32.218495 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:32.233688 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:32.233731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:32.318927 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:32.318947 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:32.318963 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:28.317046 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:30.317308 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.318273 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.370654 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:33.371038 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.991594 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:34.492142 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.403734 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:32.403786 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:34.947857 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:34.961894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:34.961983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:35.006279 1143678 cri.go:89] found id: ""
	I0603 13:53:35.006308 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.006318 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:35.006326 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:35.006398 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:35.042765 1143678 cri.go:89] found id: ""
	I0603 13:53:35.042794 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.042807 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:35.042815 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:35.042877 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:35.084332 1143678 cri.go:89] found id: ""
	I0603 13:53:35.084365 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.084375 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:35.084381 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:35.084448 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:35.121306 1143678 cri.go:89] found id: ""
	I0603 13:53:35.121337 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.121348 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:35.121358 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:35.121444 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:35.155952 1143678 cri.go:89] found id: ""
	I0603 13:53:35.155994 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.156008 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:35.156016 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:35.156089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:35.196846 1143678 cri.go:89] found id: ""
	I0603 13:53:35.196881 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.196893 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:35.196902 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:35.196972 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:35.232396 1143678 cri.go:89] found id: ""
	I0603 13:53:35.232429 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.232440 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:35.232449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:35.232528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:35.269833 1143678 cri.go:89] found id: ""
	I0603 13:53:35.269862 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.269872 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:35.269885 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:35.269902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:35.357754 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:35.357794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:35.399793 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:35.399822 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:35.453742 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:35.453782 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:35.468431 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:35.468465 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:35.547817 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:34.816178 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.817093 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:35.373072 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:37.870173 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.989364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.990163 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.048517 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:38.063481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:38.063569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:38.100487 1143678 cri.go:89] found id: ""
	I0603 13:53:38.100523 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.100535 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:38.100543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:38.100612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:38.137627 1143678 cri.go:89] found id: ""
	I0603 13:53:38.137665 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.137678 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:38.137686 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:38.137754 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:38.176138 1143678 cri.go:89] found id: ""
	I0603 13:53:38.176172 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.176190 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:38.176199 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:38.176265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:38.214397 1143678 cri.go:89] found id: ""
	I0603 13:53:38.214439 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.214451 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:38.214459 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:38.214528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:38.250531 1143678 cri.go:89] found id: ""
	I0603 13:53:38.250563 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.250573 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:38.250580 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:38.250642 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:38.286558 1143678 cri.go:89] found id: ""
	I0603 13:53:38.286587 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.286595 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:38.286601 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:38.286652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:38.327995 1143678 cri.go:89] found id: ""
	I0603 13:53:38.328043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.328055 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:38.328062 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:38.328126 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:38.374266 1143678 cri.go:89] found id: ""
	I0603 13:53:38.374300 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.374311 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:38.374324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:38.374341 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:38.426876 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:38.426918 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:38.443296 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:38.443340 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:38.514702 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:38.514728 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:38.514746 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:38.601536 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:38.601590 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
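
The block above is one iteration of minikube's wait-for-apiserver loop: no kube-apiserver process is found, each control-plane component is probed through crictl (all return zero containers), and the harness falls back to gathering kubelet, dmesg, CRI-O, and "describe nodes" output, with the kubectl call failing because nothing is listening on localhost:8443. A minimal sketch of the same crictl probe, assuming crictl and sudo are available on the node (component names and flags are taken from the log lines above; this is an illustration, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Control-plane components probed in the log block above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// With --quiet, crictl prints one container ID per line; empty output
		// means no container (running or exited) matches the name filter.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		ids := strings.TrimSpace(string(out))
		if ids == "" {
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%q containers:\n%s\n", name, ids)
		}
	}
}
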
	I0603 13:53:41.141766 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:41.155927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:41.156006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:41.196829 1143678 cri.go:89] found id: ""
	I0603 13:53:41.196871 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.196884 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:41.196896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:41.196967 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:41.231729 1143678 cri.go:89] found id: ""
	I0603 13:53:41.231780 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.231802 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:41.231812 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:41.231900 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:41.266663 1143678 cri.go:89] found id: ""
	I0603 13:53:41.266699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.266711 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:41.266720 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:41.266783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:41.305251 1143678 cri.go:89] found id: ""
	I0603 13:53:41.305278 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.305286 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:41.305292 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:41.305351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:41.342527 1143678 cri.go:89] found id: ""
	I0603 13:53:41.342556 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.342568 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:41.342575 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:41.342637 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:41.379950 1143678 cri.go:89] found id: ""
	I0603 13:53:41.379982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.379992 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:41.379999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:41.380068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:41.414930 1143678 cri.go:89] found id: ""
	I0603 13:53:41.414965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.414973 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:41.414980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:41.415043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:41.449265 1143678 cri.go:89] found id: ""
	I0603 13:53:41.449299 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.449310 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:41.449324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:41.449343 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:41.502525 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:41.502560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:41.519357 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:41.519390 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:41.591443 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:41.591471 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:41.591485 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:41.668758 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:41.668802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:39.317333 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.317598 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:40.370844 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:42.871161 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.489574 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:43.989620 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
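
The interleaved pod_ready lines come from the other test processes (pids 1143252, 1143450, 1142862), each polling its metrics-server pod for the Ready condition, which stays False throughout these runs. A minimal sketch of such a readiness check with client-go, assuming a kubeconfig-based client and using one of the pod names from the log purely for illustration (this approximates the check; it is not the test helper's code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, mirroring the
// "Ready":"False" status printed by the pod_ready log lines above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log for illustration; the real tests select the
	// metrics-server pod by label in the kube-system namespace.
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-569cc877fc-v7d9t", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready: %v\n", pod.Name, podReady(pod))
}
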
	I0603 13:53:44.211768 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:44.226789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:44.226869 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:44.265525 1143678 cri.go:89] found id: ""
	I0603 13:53:44.265553 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.265561 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:44.265568 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:44.265646 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:44.304835 1143678 cri.go:89] found id: ""
	I0603 13:53:44.304866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.304874 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:44.304880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:44.304935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:44.345832 1143678 cri.go:89] found id: ""
	I0603 13:53:44.345875 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.345885 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:44.345891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:44.345950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:44.386150 1143678 cri.go:89] found id: ""
	I0603 13:53:44.386186 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.386198 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:44.386207 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:44.386268 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:44.423662 1143678 cri.go:89] found id: ""
	I0603 13:53:44.423697 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.423709 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:44.423719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:44.423788 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:44.462437 1143678 cri.go:89] found id: ""
	I0603 13:53:44.462464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.462473 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:44.462481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:44.462567 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:44.501007 1143678 cri.go:89] found id: ""
	I0603 13:53:44.501062 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.501074 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:44.501081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:44.501138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:44.535501 1143678 cri.go:89] found id: ""
	I0603 13:53:44.535543 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.535554 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:44.535567 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:44.535585 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:44.587114 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:44.587157 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:44.602151 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:44.602180 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:44.674065 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:44.674104 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:44.674122 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:44.757443 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:44.757488 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.306481 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:47.319895 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:47.319958 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:43.818030 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.316852 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:45.370762 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.371799 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:49.871512 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.488076 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:48.488472 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.488892 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.356975 1143678 cri.go:89] found id: ""
	I0603 13:53:47.357013 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.357026 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:47.357034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:47.357106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:47.393840 1143678 cri.go:89] found id: ""
	I0603 13:53:47.393869 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.393877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:47.393884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:47.393936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:47.428455 1143678 cri.go:89] found id: ""
	I0603 13:53:47.428493 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.428506 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:47.428514 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:47.428597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:47.463744 1143678 cri.go:89] found id: ""
	I0603 13:53:47.463777 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.463788 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:47.463795 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:47.463855 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:47.498134 1143678 cri.go:89] found id: ""
	I0603 13:53:47.498159 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.498167 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:47.498173 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:47.498245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:47.534153 1143678 cri.go:89] found id: ""
	I0603 13:53:47.534195 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.534206 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:47.534219 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:47.534272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:47.567148 1143678 cri.go:89] found id: ""
	I0603 13:53:47.567179 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.567187 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:47.567194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:47.567249 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:47.605759 1143678 cri.go:89] found id: ""
	I0603 13:53:47.605790 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.605798 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:47.605810 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:47.605824 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:47.683651 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:47.683692 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:47.683705 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:47.763810 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:47.763848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.806092 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:47.806131 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:47.859637 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:47.859677 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.377538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:50.391696 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:50.391776 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:50.433968 1143678 cri.go:89] found id: ""
	I0603 13:53:50.434001 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.434013 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:50.434020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:50.434080 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:50.470561 1143678 cri.go:89] found id: ""
	I0603 13:53:50.470589 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.470596 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:50.470603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:50.470662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:50.510699 1143678 cri.go:89] found id: ""
	I0603 13:53:50.510727 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.510735 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:50.510741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:50.510808 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:50.553386 1143678 cri.go:89] found id: ""
	I0603 13:53:50.553433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.553445 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:50.553452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:50.553533 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:50.589731 1143678 cri.go:89] found id: ""
	I0603 13:53:50.589779 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.589792 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:50.589801 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:50.589885 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:50.625144 1143678 cri.go:89] found id: ""
	I0603 13:53:50.625180 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.625192 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:50.625201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:50.625274 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:50.669021 1143678 cri.go:89] found id: ""
	I0603 13:53:50.669053 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.669061 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:50.669067 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:50.669121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:50.714241 1143678 cri.go:89] found id: ""
	I0603 13:53:50.714270 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.714284 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:50.714297 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:50.714314 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:50.766290 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:50.766333 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.797242 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:50.797275 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:50.866589 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:50.866616 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:50.866637 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:50.948808 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:50.948854 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:48.318282 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.817445 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.370798 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.377027 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.490719 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.989907 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:53.496797 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:53.511944 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:53.512021 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:53.549028 1143678 cri.go:89] found id: ""
	I0603 13:53:53.549057 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.549066 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:53.549072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:53.549128 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:53.583533 1143678 cri.go:89] found id: ""
	I0603 13:53:53.583566 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.583578 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:53.583586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:53.583652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:53.618578 1143678 cri.go:89] found id: ""
	I0603 13:53:53.618609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.618618 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:53.618626 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:53.618701 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:53.653313 1143678 cri.go:89] found id: ""
	I0603 13:53:53.653347 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.653358 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:53.653364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:53.653442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:53.689805 1143678 cri.go:89] found id: ""
	I0603 13:53:53.689839 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.689849 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:53.689857 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:53.689931 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:53.725538 1143678 cri.go:89] found id: ""
	I0603 13:53:53.725571 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.725584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:53.725592 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:53.725648 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:53.762284 1143678 cri.go:89] found id: ""
	I0603 13:53:53.762325 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.762336 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:53.762345 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:53.762419 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:53.799056 1143678 cri.go:89] found id: ""
	I0603 13:53:53.799083 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.799092 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:53.799102 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:53.799115 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:53.873743 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:53.873809 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:53.919692 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:53.919724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:53.969068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:53.969109 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.983840 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:53.983866 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:54.054842 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.555587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:56.570014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:56.570076 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:56.604352 1143678 cri.go:89] found id: ""
	I0603 13:53:56.604386 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.604400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:56.604408 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:56.604479 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:56.648126 1143678 cri.go:89] found id: ""
	I0603 13:53:56.648161 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.648171 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:56.648177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:56.648231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:56.685621 1143678 cri.go:89] found id: ""
	I0603 13:53:56.685658 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.685670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:56.685678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:56.685763 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:56.721860 1143678 cri.go:89] found id: ""
	I0603 13:53:56.721891 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.721913 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:56.721921 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:56.721989 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:56.757950 1143678 cri.go:89] found id: ""
	I0603 13:53:56.757982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.757995 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:56.758002 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:56.758068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:56.794963 1143678 cri.go:89] found id: ""
	I0603 13:53:56.794991 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.794999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:56.795007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:56.795072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:56.831795 1143678 cri.go:89] found id: ""
	I0603 13:53:56.831827 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.831839 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:56.831846 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:56.831913 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:56.869263 1143678 cri.go:89] found id: ""
	I0603 13:53:56.869293 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.869303 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:56.869314 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:56.869331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:56.945068 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.945096 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:56.945110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:57.028545 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:57.028582 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:57.069973 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:57.070009 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:57.126395 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:57.126436 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.316616 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:55.316981 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:57.317295 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.870680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.371553 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.990964 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.489616 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.644870 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:59.658547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:59.658634 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:59.694625 1143678 cri.go:89] found id: ""
	I0603 13:53:59.694656 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.694665 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:59.694673 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:59.694740 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:59.730475 1143678 cri.go:89] found id: ""
	I0603 13:53:59.730573 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.730590 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:59.730599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:59.730696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:59.768533 1143678 cri.go:89] found id: ""
	I0603 13:53:59.768567 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.768580 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:59.768590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:59.768662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:59.804913 1143678 cri.go:89] found id: ""
	I0603 13:53:59.804944 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.804953 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:59.804960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:59.805014 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:59.850331 1143678 cri.go:89] found id: ""
	I0603 13:53:59.850363 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.850376 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:59.850385 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:59.850466 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:59.890777 1143678 cri.go:89] found id: ""
	I0603 13:53:59.890814 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.890826 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:59.890834 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:59.890909 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:59.931233 1143678 cri.go:89] found id: ""
	I0603 13:53:59.931268 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.931277 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:59.931283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:59.931354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:59.966267 1143678 cri.go:89] found id: ""
	I0603 13:53:59.966307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.966319 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:59.966333 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:59.966356 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:00.019884 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:00.019924 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:00.034936 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:00.034982 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:00.115002 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:00.115035 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:00.115053 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:00.189992 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:00.190035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:59.818065 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.316183 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.870679 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.872563 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.490213 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.988699 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.737387 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:02.752131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:02.752220 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:02.787863 1143678 cri.go:89] found id: ""
	I0603 13:54:02.787893 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.787902 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:02.787908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:02.787974 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:02.824938 1143678 cri.go:89] found id: ""
	I0603 13:54:02.824973 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.824983 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:02.824989 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:02.825061 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:02.861425 1143678 cri.go:89] found id: ""
	I0603 13:54:02.861461 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.861469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:02.861476 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:02.861546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:02.907417 1143678 cri.go:89] found id: ""
	I0603 13:54:02.907453 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.907475 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:02.907483 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:02.907553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:02.953606 1143678 cri.go:89] found id: ""
	I0603 13:54:02.953640 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.953649 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:02.953655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:02.953728 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:03.007785 1143678 cri.go:89] found id: ""
	I0603 13:54:03.007816 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.007824 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:03.007830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:03.007896 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:03.058278 1143678 cri.go:89] found id: ""
	I0603 13:54:03.058316 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.058329 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:03.058338 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:03.058404 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:03.094766 1143678 cri.go:89] found id: ""
	I0603 13:54:03.094800 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.094811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:03.094824 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:03.094840 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:03.163663 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:03.163690 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:03.163704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:03.250751 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:03.250802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:03.292418 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:03.292466 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:03.344552 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:03.344600 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:05.859965 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:05.875255 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:05.875340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:05.918590 1143678 cri.go:89] found id: ""
	I0603 13:54:05.918619 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.918630 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:05.918637 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:05.918706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:05.953932 1143678 cri.go:89] found id: ""
	I0603 13:54:05.953969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.953980 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:05.953988 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:05.954056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:05.993319 1143678 cri.go:89] found id: ""
	I0603 13:54:05.993348 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.993359 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:05.993368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:05.993468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:06.033047 1143678 cri.go:89] found id: ""
	I0603 13:54:06.033079 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.033087 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:06.033100 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:06.033156 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:06.072607 1143678 cri.go:89] found id: ""
	I0603 13:54:06.072631 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.072640 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:06.072647 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:06.072698 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:06.109944 1143678 cri.go:89] found id: ""
	I0603 13:54:06.109990 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.109999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:06.110007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:06.110071 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:06.150235 1143678 cri.go:89] found id: ""
	I0603 13:54:06.150266 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.150276 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:06.150284 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:06.150349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:06.193963 1143678 cri.go:89] found id: ""
	I0603 13:54:06.193992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.194004 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:06.194017 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:06.194035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:06.235790 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:06.235827 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:06.289940 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:06.289980 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:06.305205 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:06.305240 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:06.381170 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:06.381191 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:06.381206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:04.316812 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.317759 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.370944 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.371668 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:05.989346 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.492021 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.958985 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:08.973364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:08.973462 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:09.015050 1143678 cri.go:89] found id: ""
	I0603 13:54:09.015087 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.015099 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:09.015107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:09.015187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:09.054474 1143678 cri.go:89] found id: ""
	I0603 13:54:09.054508 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.054521 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:09.054533 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:09.054590 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:09.090867 1143678 cri.go:89] found id: ""
	I0603 13:54:09.090905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.090917 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:09.090926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:09.090995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:09.128401 1143678 cri.go:89] found id: ""
	I0603 13:54:09.128433 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.128441 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:09.128447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:09.128511 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:09.162952 1143678 cri.go:89] found id: ""
	I0603 13:54:09.162992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.163005 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:09.163013 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:09.163078 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:09.200375 1143678 cri.go:89] found id: ""
	I0603 13:54:09.200402 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.200410 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:09.200416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:09.200495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:09.244694 1143678 cri.go:89] found id: ""
	I0603 13:54:09.244729 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.244740 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:09.244749 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:09.244818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:09.281633 1143678 cri.go:89] found id: ""
	I0603 13:54:09.281666 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.281675 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:09.281686 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:09.281700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:09.341287 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:09.341331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:09.355379 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:09.355415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:09.435934 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:09.435960 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:09.435979 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:09.518203 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:09.518248 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
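The cycle above is minikube's logs.go fallback path: with no kube-apiserver, etcd, or other control-plane container visible to CRI-O, it collects kubelet, dmesg, CRI-O and container-status output instead. A minimal manual reproduction of the same probe, assuming the affected profile name is substituted for the placeholder <profile> and that crictl is available in the guest:

	# returns container IDs, or nothing when the control plane never started
	minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	# the same sources the log gatherer falls back to
	minikube -p <profile> ssh -- "sudo journalctl -u kubelet -n 400"
	minikube -p <profile> ssh -- "sudo journalctl -u crio -n 400"

An empty result from the first command corresponds to the repeated "0 containers" / "No container was found matching" lines in this stream.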
	I0603 13:54:12.061538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:12.076939 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:12.077020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:12.114308 1143678 cri.go:89] found id: ""
	I0603 13:54:12.114344 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.114353 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:12.114359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:12.114427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:12.150336 1143678 cri.go:89] found id: ""
	I0603 13:54:12.150368 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.150383 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:12.150390 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:12.150455 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:12.189881 1143678 cri.go:89] found id: ""
	I0603 13:54:12.189934 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.189946 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:12.189954 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:12.190020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:12.226361 1143678 cri.go:89] found id: ""
	I0603 13:54:12.226396 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.226407 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:12.226415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:12.226488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:12.264216 1143678 cri.go:89] found id: ""
	I0603 13:54:12.264257 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.264265 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:12.264271 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:12.264341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:12.306563 1143678 cri.go:89] found id: ""
	I0603 13:54:12.306600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.306612 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:12.306620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:12.306690 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:12.347043 1143678 cri.go:89] found id: ""
	I0603 13:54:12.347082 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.347094 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:12.347105 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:12.347170 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:08.317824 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.816743 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.816776 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.372079 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.872314 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.990240 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:13.489762 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.383947 1143678 cri.go:89] found id: ""
	I0603 13:54:12.383978 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.383989 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:12.384001 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:12.384018 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:12.464306 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:12.464348 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.505079 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:12.505110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:12.563631 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:12.563666 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:12.578328 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:12.578357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:12.646015 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.147166 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:15.163786 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:15.163865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:15.202249 1143678 cri.go:89] found id: ""
	I0603 13:54:15.202286 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.202296 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:15.202304 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:15.202372 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:15.236305 1143678 cri.go:89] found id: ""
	I0603 13:54:15.236345 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.236359 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:15.236368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:15.236459 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:15.273457 1143678 cri.go:89] found id: ""
	I0603 13:54:15.273493 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.273510 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:15.273521 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:15.273592 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:15.314917 1143678 cri.go:89] found id: ""
	I0603 13:54:15.314951 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.314963 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:15.314984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:15.315055 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:15.353060 1143678 cri.go:89] found id: ""
	I0603 13:54:15.353098 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.353112 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:15.353118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:15.353197 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:15.390412 1143678 cri.go:89] found id: ""
	I0603 13:54:15.390448 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.390460 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:15.390469 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:15.390534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:15.427735 1143678 cri.go:89] found id: ""
	I0603 13:54:15.427771 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.427782 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:15.427789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:15.427854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:15.467134 1143678 cri.go:89] found id: ""
	I0603 13:54:15.467165 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.467175 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:15.467184 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:15.467199 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:15.517924 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:15.517973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:15.531728 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:15.531760 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:15.608397 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.608421 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:15.608444 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:15.688976 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:15.689016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.319250 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:16.817018 1143252 pod_ready.go:81] duration metric: took 4m0.00664589s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:16.817042 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:16.817049 1143252 pod_ready.go:38] duration metric: took 4m6.670583216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:16.817081 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:16.817110 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:16.817158 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:16.871314 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:16.871339 1143252 cri.go:89] found id: ""
	I0603 13:54:16.871350 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:16.871405 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.876249 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:16.876319 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:16.917267 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:16.917298 1143252 cri.go:89] found id: ""
	I0603 13:54:16.917310 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:16.917374 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.923290 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:16.923374 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:16.963598 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:16.963619 1143252 cri.go:89] found id: ""
	I0603 13:54:16.963628 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:16.963689 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.968201 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:16.968277 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:17.008229 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:17.008264 1143252 cri.go:89] found id: ""
	I0603 13:54:17.008274 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:17.008341 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.012719 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:17.012795 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:17.048353 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.048384 1143252 cri.go:89] found id: ""
	I0603 13:54:17.048394 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:17.048459 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.053094 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:17.053162 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:17.088475 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:17.088507 1143252 cri.go:89] found id: ""
	I0603 13:54:17.088518 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:17.088583 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.093293 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:17.093373 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:17.130335 1143252 cri.go:89] found id: ""
	I0603 13:54:17.130370 1143252 logs.go:276] 0 containers: []
	W0603 13:54:17.130381 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:17.130389 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:17.130472 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:17.176283 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:17.176317 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:17.176324 1143252 cri.go:89] found id: ""
	I0603 13:54:17.176335 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:17.176409 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.181455 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.185881 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:17.185902 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:17.239636 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:17.239680 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:17.309488 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:17.309532 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:17.362243 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:17.362282 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:17.401389 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:17.401440 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.442095 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:17.442127 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:17.923198 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:17.923247 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:17.939968 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:17.940000 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:18.075054 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:18.075098 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:18.113954 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:18.113994 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:18.181862 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:18.181906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:18.227105 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:18.227137 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:18.272684 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.272721 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.371753 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:17.870321 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:19.879331 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:15.990326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.489960 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
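The interleaved lines tagged 1143252, 1143450 and 1142862 come from other profiles running in parallel; each one is polling the Ready condition of its metrics-server pod (pod_ready.go:102). A hedged kubectl equivalent of that poll, assuming the addon's usual k8s-app=metrics-server label and with the relevant context substituted for <context>:

	kubectl --context <context> -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'

A result of False matches the 'has status "Ready":"False"' lines above.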
	I0603 13:54:18.228279 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:18.242909 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:18.242985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:18.285400 1143678 cri.go:89] found id: ""
	I0603 13:54:18.285445 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.285455 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:18.285461 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:18.285521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:18.321840 1143678 cri.go:89] found id: ""
	I0603 13:54:18.321868 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.321877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:18.321884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:18.321943 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:18.358856 1143678 cri.go:89] found id: ""
	I0603 13:54:18.358888 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.358902 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:18.358911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:18.358979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:18.395638 1143678 cri.go:89] found id: ""
	I0603 13:54:18.395678 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.395691 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:18.395699 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:18.395766 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:18.435541 1143678 cri.go:89] found id: ""
	I0603 13:54:18.435570 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.435581 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:18.435589 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:18.435653 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:18.469491 1143678 cri.go:89] found id: ""
	I0603 13:54:18.469527 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.469538 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:18.469545 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:18.469615 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:18.507986 1143678 cri.go:89] found id: ""
	I0603 13:54:18.508018 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.508030 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:18.508039 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:18.508106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:18.542311 1143678 cri.go:89] found id: ""
	I0603 13:54:18.542343 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.542351 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:18.542361 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:18.542375 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:18.619295 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.619337 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:18.662500 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:18.662540 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:18.714392 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:18.714432 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:18.728750 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:18.728785 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:18.800786 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
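Every "describe nodes" attempt in this stream fails the same way: the bundled kubectl reads /var/lib/minikube/kubeconfig, which points at localhost:8443, and the connection is refused because no apiserver container exists. A quick hedged check of whether anything is listening on that port, assuming ss is present in the guest and using the same <profile> placeholder as above:

	minikube -p <profile> ssh -- "sudo ss -ltn | grep -w 8443 || echo 'nothing listening on 8443'"

Nothing listening there is consistent with the connection-refused stderr repeated throughout this section.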
	I0603 13:54:21.301554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:21.315880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:21.315944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:21.358178 1143678 cri.go:89] found id: ""
	I0603 13:54:21.358208 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.358217 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:21.358227 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:21.358289 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:21.395873 1143678 cri.go:89] found id: ""
	I0603 13:54:21.395969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.395995 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:21.396014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:21.396111 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:21.431781 1143678 cri.go:89] found id: ""
	I0603 13:54:21.431810 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.431822 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:21.431831 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:21.431906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.472840 1143678 cri.go:89] found id: ""
	I0603 13:54:21.472872 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.472885 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:21.472893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.472955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.512296 1143678 cri.go:89] found id: ""
	I0603 13:54:21.512333 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.512346 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:21.512353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.512421 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.547555 1143678 cri.go:89] found id: ""
	I0603 13:54:21.547588 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.547599 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:21.547609 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.547670 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.584972 1143678 cri.go:89] found id: ""
	I0603 13:54:21.585005 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.585013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.585019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:21.585085 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:21.621566 1143678 cri.go:89] found id: ""
	I0603 13:54:21.621599 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.621610 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:21.621623 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:21.621639 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:21.637223 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:21.637263 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:21.712272 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.712294 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.712310 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.800453 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:21.800490 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:21.841477 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.841525 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:20.819740 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:20.836917 1143252 api_server.go:72] duration metric: took 4m15.913250824s to wait for apiserver process to appear ...
	I0603 13:54:20.836947 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:20.836988 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:20.837038 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:20.874034 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:20.874064 1143252 cri.go:89] found id: ""
	I0603 13:54:20.874076 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:20.874146 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.878935 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:20.879020 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:20.920390 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:20.920417 1143252 cri.go:89] found id: ""
	I0603 13:54:20.920425 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:20.920494 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.924858 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:20.924934 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:20.966049 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:20.966077 1143252 cri.go:89] found id: ""
	I0603 13:54:20.966088 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:20.966174 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.970734 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:20.970812 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.010892 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.010918 1143252 cri.go:89] found id: ""
	I0603 13:54:21.010929 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:21.010994 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.016274 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.016347 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.055294 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.055318 1143252 cri.go:89] found id: ""
	I0603 13:54:21.055327 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:21.055375 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.060007 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.060069 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.099200 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:21.099225 1143252 cri.go:89] found id: ""
	I0603 13:54:21.099236 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:21.099309 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.103590 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.103662 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.140375 1143252 cri.go:89] found id: ""
	I0603 13:54:21.140409 1143252 logs.go:276] 0 containers: []
	W0603 13:54:21.140422 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.140431 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:21.140498 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:21.180709 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.180735 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.180739 1143252 cri.go:89] found id: ""
	I0603 13:54:21.180747 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:21.180814 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.184952 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.189111 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.189140 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.663768 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:21.663807 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:21.719542 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:21.719573 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:21.786686 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:21.786725 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:21.824908 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:21.824948 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.864778 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:21.864818 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.904450 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:21.904480 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.942006 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:21.942040 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.979636 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.979673 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:22.033943 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:22.033980 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:22.048545 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:22.048578 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:22.154866 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:22.154906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:22.218033 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:22.218073 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:22.374700 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.871898 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:20.989874 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:23.489083 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.394864 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:24.408416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.408527 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.444572 1143678 cri.go:89] found id: ""
	I0603 13:54:24.444603 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.444612 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:24.444618 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.444672 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.483710 1143678 cri.go:89] found id: ""
	I0603 13:54:24.483744 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.483755 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:24.483763 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.483837 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.522396 1143678 cri.go:89] found id: ""
	I0603 13:54:24.522437 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.522450 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:24.522457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.522520 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.560865 1143678 cri.go:89] found id: ""
	I0603 13:54:24.560896 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.560905 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:24.560911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.560964 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:24.598597 1143678 cri.go:89] found id: ""
	I0603 13:54:24.598632 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.598643 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:24.598657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:24.598722 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:24.638854 1143678 cri.go:89] found id: ""
	I0603 13:54:24.638885 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.638897 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:24.638908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:24.638979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:24.678039 1143678 cri.go:89] found id: ""
	I0603 13:54:24.678076 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.678088 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:24.678096 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:24.678166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:24.712836 1143678 cri.go:89] found id: ""
	I0603 13:54:24.712871 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.712883 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:24.712896 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:24.712913 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:24.763503 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:24.763545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:24.779383 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:24.779416 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:24.867254 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:24.867287 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:24.867307 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:24.944920 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:24.944957 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:24.768551 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:54:24.774942 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:54:24.776278 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:24.776301 1143252 api_server.go:131] duration metric: took 3.939347802s to wait for apiserver health ...
	I0603 13:54:24.776310 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:24.776334 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.776386 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.827107 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:24.827139 1143252 cri.go:89] found id: ""
	I0603 13:54:24.827152 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:24.827210 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.831681 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.831752 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.875645 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:24.875689 1143252 cri.go:89] found id: ""
	I0603 13:54:24.875711 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:24.875778 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.880157 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.880256 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.932131 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:24.932157 1143252 cri.go:89] found id: ""
	I0603 13:54:24.932167 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:24.932262 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.938104 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.938168 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.980289 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:24.980318 1143252 cri.go:89] found id: ""
	I0603 13:54:24.980327 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:24.980389 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.985608 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.985687 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:25.033726 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.033749 1143252 cri.go:89] found id: ""
	I0603 13:54:25.033757 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:25.033811 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.038493 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:25.038561 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:25.077447 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.077474 1143252 cri.go:89] found id: ""
	I0603 13:54:25.077485 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:25.077545 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.081701 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:25.081770 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:25.120216 1143252 cri.go:89] found id: ""
	I0603 13:54:25.120246 1143252 logs.go:276] 0 containers: []
	W0603 13:54:25.120254 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:25.120261 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:25.120313 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:25.162562 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.162596 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.162602 1143252 cri.go:89] found id: ""
	I0603 13:54:25.162613 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:25.162678 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.167179 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.171531 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:25.171558 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:25.223749 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:25.223787 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:25.290251 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:25.290293 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:25.315271 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:25.315302 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:25.433219 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:25.433257 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:25.473156 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:25.473194 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:25.513988 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:25.514015 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.587224 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:25.587260 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:25.638872 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:25.638909 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:25.687323 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:25.687372 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.739508 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:25.739539 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.775066 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:25.775096 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.811982 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:25.812016 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:28.685228 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:28.685261 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.685265 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.685269 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.685272 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.685276 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.685279 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.685285 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.685290 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.685298 1143252 system_pods.go:74] duration metric: took 3.908982484s to wait for pod list to return data ...
	I0603 13:54:28.685305 1143252 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:28.687914 1143252 default_sa.go:45] found service account: "default"
	I0603 13:54:28.687939 1143252 default_sa.go:55] duration metric: took 2.627402ms for default service account to be created ...
	I0603 13:54:28.687947 1143252 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:28.693336 1143252 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:28.693369 1143252 system_pods.go:89] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.693375 1143252 system_pods.go:89] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.693379 1143252 system_pods.go:89] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.693385 1143252 system_pods.go:89] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.693389 1143252 system_pods.go:89] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.693393 1143252 system_pods.go:89] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.693401 1143252 system_pods.go:89] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.693418 1143252 system_pods.go:89] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.693438 1143252 system_pods.go:126] duration metric: took 5.484487ms to wait for k8s-apps to be running ...
	I0603 13:54:28.693450 1143252 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:28.693497 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:28.710364 1143252 system_svc.go:56] duration metric: took 16.901982ms WaitForService to wait for kubelet
	I0603 13:54:28.710399 1143252 kubeadm.go:576] duration metric: took 4m23.786738812s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:28.710444 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:28.713300 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:28.713328 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:28.713362 1143252 node_conditions.go:105] duration metric: took 2.909242ms to run NodePressure ...
	I0603 13:54:28.713382 1143252 start.go:240] waiting for startup goroutines ...
	I0603 13:54:28.713392 1143252 start.go:245] waiting for cluster config update ...
	I0603 13:54:28.713424 1143252 start.go:254] writing updated cluster config ...
	I0603 13:54:28.713798 1143252 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:28.767538 1143252 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:28.769737 1143252 out.go:177] * Done! kubectl is now configured to use "embed-certs-223260" cluster and "default" namespace by default
	I0603 13:54:27.370695 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:29.870214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:25.990136 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:28.489276 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:30.489392 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:27.495908 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:27.509885 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:27.509968 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:27.545591 1143678 cri.go:89] found id: ""
	I0603 13:54:27.545626 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.545635 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:27.545641 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:27.545695 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:27.583699 1143678 cri.go:89] found id: ""
	I0603 13:54:27.583728 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.583740 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:27.583748 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:27.583835 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:27.623227 1143678 cri.go:89] found id: ""
	I0603 13:54:27.623268 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.623277 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:27.623283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:27.623341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:27.663057 1143678 cri.go:89] found id: ""
	I0603 13:54:27.663090 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.663102 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:27.663109 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:27.663187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:27.708448 1143678 cri.go:89] found id: ""
	I0603 13:54:27.708481 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.708489 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:27.708495 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:27.708551 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:27.743629 1143678 cri.go:89] found id: ""
	I0603 13:54:27.743663 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.743674 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:27.743682 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:27.743748 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:27.778094 1143678 cri.go:89] found id: ""
	I0603 13:54:27.778128 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.778137 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:27.778147 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:27.778210 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:27.813137 1143678 cri.go:89] found id: ""
	I0603 13:54:27.813170 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.813180 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:27.813192 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:27.813208 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:27.861100 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:27.861136 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:27.914752 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:27.914794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:27.929479 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:27.929511 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:28.002898 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:28.002926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:28.002942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.581890 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:30.595982 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:30.596068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:30.638804 1143678 cri.go:89] found id: ""
	I0603 13:54:30.638841 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.638853 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:30.638862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:30.638942 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:30.677202 1143678 cri.go:89] found id: ""
	I0603 13:54:30.677242 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.677253 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:30.677262 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:30.677329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:30.717382 1143678 cri.go:89] found id: ""
	I0603 13:54:30.717436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.717446 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:30.717455 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:30.717523 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:30.753691 1143678 cri.go:89] found id: ""
	I0603 13:54:30.753719 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.753728 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:30.753734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:30.753798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:30.790686 1143678 cri.go:89] found id: ""
	I0603 13:54:30.790714 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.790723 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:30.790729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:30.790783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:30.830196 1143678 cri.go:89] found id: ""
	I0603 13:54:30.830224 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.830237 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:30.830245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:30.830299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:30.865952 1143678 cri.go:89] found id: ""
	I0603 13:54:30.865980 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.865992 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:30.866000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:30.866066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:30.901561 1143678 cri.go:89] found id: ""
	I0603 13:54:30.901592 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.901601 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:30.901610 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:30.901627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.979416 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:30.979459 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:31.035024 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:31.035061 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:31.089005 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:31.089046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:31.105176 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:31.105210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:31.172862 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:32.371040 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.870810 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:32.989041 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.989599 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:33.674069 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:33.688423 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:33.688499 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:33.729840 1143678 cri.go:89] found id: ""
	I0603 13:54:33.729876 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.729886 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:33.729893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:33.729945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:33.764984 1143678 cri.go:89] found id: ""
	I0603 13:54:33.765010 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.765018 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:33.765025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:33.765075 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:33.798411 1143678 cri.go:89] found id: ""
	I0603 13:54:33.798446 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.798459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:33.798468 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:33.798547 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:33.831565 1143678 cri.go:89] found id: ""
	I0603 13:54:33.831600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.831611 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:33.831620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:33.831688 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:33.869701 1143678 cri.go:89] found id: ""
	I0603 13:54:33.869727 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.869735 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:33.869741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:33.869802 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:33.906108 1143678 cri.go:89] found id: ""
	I0603 13:54:33.906134 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.906144 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:33.906153 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:33.906218 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:33.946577 1143678 cri.go:89] found id: ""
	I0603 13:54:33.946607 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.946615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:33.946621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:33.946673 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:33.986691 1143678 cri.go:89] found id: ""
	I0603 13:54:33.986724 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.986743 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:33.986757 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:33.986775 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:34.044068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:34.044110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:34.059686 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:34.059724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:34.141490 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:34.141514 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:34.141531 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:34.227890 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:34.227930 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:36.778969 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:36.792527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:36.792612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:36.828044 1143678 cri.go:89] found id: ""
	I0603 13:54:36.828083 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.828096 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:36.828102 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:36.828166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:36.863869 1143678 cri.go:89] found id: ""
	I0603 13:54:36.863905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.863917 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:36.863926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:36.863996 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:36.899610 1143678 cri.go:89] found id: ""
	I0603 13:54:36.899649 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.899661 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:36.899669 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:36.899742 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:36.938627 1143678 cri.go:89] found id: ""
	I0603 13:54:36.938664 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.938675 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:36.938683 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:36.938739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:36.973810 1143678 cri.go:89] found id: ""
	I0603 13:54:36.973842 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.973857 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:36.973863 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:36.973915 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.013759 1143678 cri.go:89] found id: ""
	I0603 13:54:37.013792 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.013805 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:37.013813 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.013881 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.049665 1143678 cri.go:89] found id: ""
	I0603 13:54:37.049697 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.049706 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.049712 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:37.049787 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:37.087405 1143678 cri.go:89] found id: ""
	I0603 13:54:37.087436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.087446 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:37.087457 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:37.087470 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:37.126443 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.126476 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.177976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:37.178015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:37.192821 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:37.192860 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:37.267895 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:37.267926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:37.267945 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:36.871536 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:37.371048 1143450 pod_ready.go:81] duration metric: took 4m0.007102739s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:37.371080 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:37.371092 1143450 pod_ready.go:38] duration metric: took 4m5.236838117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:37.371111 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:37.371145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:37.371202 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:37.428454 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:37.428487 1143450 cri.go:89] found id: ""
	I0603 13:54:37.428498 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:37.428564 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.434473 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:37.434552 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:37.476251 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.476288 1143450 cri.go:89] found id: ""
	I0603 13:54:37.476300 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:37.476368 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.483190 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:37.483280 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:37.528660 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.528693 1143450 cri.go:89] found id: ""
	I0603 13:54:37.528704 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:37.528797 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.533716 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:37.533809 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:37.573995 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.574016 1143450 cri.go:89] found id: ""
	I0603 13:54:37.574025 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:37.574071 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.578385 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:37.578465 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:37.616468 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:37.616511 1143450 cri.go:89] found id: ""
	I0603 13:54:37.616522 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:37.616603 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.621204 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:37.621277 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.661363 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.661390 1143450 cri.go:89] found id: ""
	I0603 13:54:37.661401 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:37.661507 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.665969 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.666055 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.705096 1143450 cri.go:89] found id: ""
	I0603 13:54:37.705128 1143450 logs.go:276] 0 containers: []
	W0603 13:54:37.705136 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.705142 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:37.705210 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:37.746365 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:37.746400 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.746404 1143450 cri.go:89] found id: ""
	I0603 13:54:37.746412 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:37.746470 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.750874 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.755146 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:37.755175 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.811365 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:37.811403 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.849687 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.849729 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.904870 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:37.904909 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.955448 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:37.955497 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.996659 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:37.996687 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:38.047501 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:38.047540 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:38.090932 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:38.090969 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:38.606612 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:38.606672 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:38.652732 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:38.652774 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:38.670570 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:38.670620 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:38.812156 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:38.812208 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:38.862940 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:38.862988 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.491134 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.990379 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.846505 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:39.860426 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:39.860514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:39.896684 1143678 cri.go:89] found id: ""
	I0603 13:54:39.896712 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.896726 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:39.896736 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:39.896801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:39.932437 1143678 cri.go:89] found id: ""
	I0603 13:54:39.932482 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.932494 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:39.932503 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:39.932571 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:39.967850 1143678 cri.go:89] found id: ""
	I0603 13:54:39.967883 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.967891 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:39.967898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:39.967952 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:40.003255 1143678 cri.go:89] found id: ""
	I0603 13:54:40.003284 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.003292 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:40.003298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:40.003351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:40.045865 1143678 cri.go:89] found id: ""
	I0603 13:54:40.045892 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.045904 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:40.045912 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:40.045976 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:40.082469 1143678 cri.go:89] found id: ""
	I0603 13:54:40.082498 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.082507 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:40.082513 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:40.082584 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:40.117181 1143678 cri.go:89] found id: ""
	I0603 13:54:40.117231 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.117242 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:40.117250 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:40.117320 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:40.157776 1143678 cri.go:89] found id: ""
	I0603 13:54:40.157813 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.157822 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:40.157832 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:40.157848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:40.213374 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:40.213437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:40.228298 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:40.228330 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:40.305450 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:40.305485 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:40.305503 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:40.393653 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:40.393704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.405129 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:41.423234 1143450 api_server.go:72] duration metric: took 4m14.998447047s to wait for apiserver process to appear ...
	I0603 13:54:41.423266 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:41.423312 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:41.423374 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:41.463540 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.463562 1143450 cri.go:89] found id: ""
	I0603 13:54:41.463570 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:41.463620 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.468145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:41.468226 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:41.511977 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.512000 1143450 cri.go:89] found id: ""
	I0603 13:54:41.512017 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:41.512081 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.516600 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:41.516674 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:41.554392 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:41.554420 1143450 cri.go:89] found id: ""
	I0603 13:54:41.554443 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:41.554508 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.558983 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:41.559039 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:41.597710 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:41.597737 1143450 cri.go:89] found id: ""
	I0603 13:54:41.597747 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:41.597811 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.602164 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:41.602227 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:41.639422 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:41.639452 1143450 cri.go:89] found id: ""
	I0603 13:54:41.639462 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:41.639532 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.644093 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:41.644171 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:41.682475 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.682506 1143450 cri.go:89] found id: ""
	I0603 13:54:41.682515 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:41.682578 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.687654 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:41.687734 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:41.724804 1143450 cri.go:89] found id: ""
	I0603 13:54:41.724839 1143450 logs.go:276] 0 containers: []
	W0603 13:54:41.724850 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:41.724858 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:41.724928 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:41.764625 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:41.764653 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:41.764659 1143450 cri.go:89] found id: ""
	I0603 13:54:41.764670 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:41.764736 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.769499 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.773782 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:41.773806 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.816486 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:41.816520 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:41.833538 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:41.833569 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.877958 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:41.878004 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.922575 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:41.922612 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.983865 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:41.983900 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:42.032746 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:42.032773 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:42.076129 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:42.076166 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:42.129061 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:42.129099 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:42.248179 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:42.248213 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:42.292179 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:42.292288 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:42.340447 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:42.340493 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:42.381993 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:42.382024 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:42.488926 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:44.990221 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:42.934691 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:42.948505 1143678 kubeadm.go:591] duration metric: took 4m4.45791317s to restartPrimaryControlPlane
	W0603 13:54:42.948592 1143678 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:54:42.948629 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:54:48.316951 1143678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.36829775s)
	I0603 13:54:48.317039 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:48.333630 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:54:48.345772 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:54:48.357359 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:54:48.357386 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:54:48.357477 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:54:48.367844 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:54:48.367917 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:54:48.379349 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:54:48.389684 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:54:48.389760 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:54:48.401562 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.412670 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:54:48.412743 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.424261 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:54:48.434598 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:54:48.434674 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:54:48.446187 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:54:48.527873 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:54:48.528073 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:54:48.695244 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:54:48.695401 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:54:48.695581 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:54:48.930141 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:54:45.281199 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:54:45.286305 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:54:45.287421 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:45.287444 1143450 api_server.go:131] duration metric: took 3.864171356s to wait for apiserver health ...
	I0603 13:54:45.287455 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:45.287486 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:45.287540 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:45.328984 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.329012 1143450 cri.go:89] found id: ""
	I0603 13:54:45.329022 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:45.329075 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.334601 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:45.334683 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:45.382942 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:45.382967 1143450 cri.go:89] found id: ""
	I0603 13:54:45.382978 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:45.383039 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.387904 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:45.387969 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:45.431948 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.431981 1143450 cri.go:89] found id: ""
	I0603 13:54:45.431992 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:45.432052 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.440993 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:45.441074 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:45.490086 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.490114 1143450 cri.go:89] found id: ""
	I0603 13:54:45.490125 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:45.490194 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.494628 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:45.494688 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:45.532264 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:45.532296 1143450 cri.go:89] found id: ""
	I0603 13:54:45.532307 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:45.532374 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.536914 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:45.536985 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:45.576641 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:45.576663 1143450 cri.go:89] found id: ""
	I0603 13:54:45.576671 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:45.576720 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.580872 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:45.580926 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:45.628834 1143450 cri.go:89] found id: ""
	I0603 13:54:45.628864 1143450 logs.go:276] 0 containers: []
	W0603 13:54:45.628872 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:45.628879 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:45.628931 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:45.671689 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:45.671719 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:45.671727 1143450 cri.go:89] found id: ""
	I0603 13:54:45.671740 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:45.671799 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.677161 1143450 ssh_runner.go:195] Run: which crictl
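	Each `sudo crictl ps -a --quiet --name=<component>` run above returns the IDs of matching containers, one per line, or nothing at all (as for kindnet). A hedged Go sketch of that discovery step, shelling out to crictl the same way; the helper name and error handling are illustrative, not minikube's cri package.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or not) whose name
// matches component, using crictl's --quiet output (one ID per line).
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %s: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	}
}
```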
	I0603 13:54:45.682179 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:45.682219 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:45.731155 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:45.731192 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:45.846365 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:45.846411 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.907694 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:45.907733 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.952881 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:45.952919 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.998674 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:45.998722 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:46.061902 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:46.061949 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:46.106017 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:46.106056 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:46.473915 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:46.473981 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:46.530212 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:46.530260 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:46.545954 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:46.545996 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:46.595057 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:46.595097 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:46.637835 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:46.637872 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
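	The "Gathering logs for ..." stage above tails each discovered container with `crictl logs --tail 400 <id>` and pulls kubelet and CRI-O output from journald. A minimal sketch of those two collection paths; the commands mirror the ones in the log, while the function names and the main routine are illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerLogs returns the last n lines of a container's logs via crictl.
func containerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

// unitLogs returns the last n journald entries for a systemd unit (kubelet, crio).
func unitLogs(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	if logs, err := unitLogs("kubelet", 400); err == nil {
		fmt.Println(logs)
	}
	// Container ID prefix taken from the kube-apiserver entry in the log above.
	if logs, err := containerLogs("50541b09cc08", 400); err == nil {
		fmt.Println(logs)
	}
}
```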
	I0603 13:54:49.190539 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:49.190572 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.190577 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.190582 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.190586 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.190590 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.190593 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.190602 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.190609 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.190620 1143450 system_pods.go:74] duration metric: took 3.903157143s to wait for pod list to return data ...
	I0603 13:54:49.190633 1143450 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:49.193192 1143450 default_sa.go:45] found service account: "default"
	I0603 13:54:49.193219 1143450 default_sa.go:55] duration metric: took 2.575016ms for default service account to be created ...
	I0603 13:54:49.193229 1143450 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:49.202028 1143450 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:49.202065 1143450 system_pods.go:89] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.202074 1143450 system_pods.go:89] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.202081 1143450 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.202088 1143450 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.202094 1143450 system_pods.go:89] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.202100 1143450 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.202113 1143450 system_pods.go:89] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.202124 1143450 system_pods.go:89] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.202135 1143450 system_pods.go:126] duration metric: took 8.899065ms to wait for k8s-apps to be running ...
	I0603 13:54:49.202152 1143450 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:49.202209 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:49.220199 1143450 system_svc.go:56] duration metric: took 18.025994ms WaitForService to wait for kubelet
	I0603 13:54:49.220242 1143450 kubeadm.go:576] duration metric: took 4m22.79546223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:49.220269 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:49.223327 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:49.223354 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:49.223367 1143450 node_conditions.go:105] duration metric: took 3.093435ms to run NodePressure ...
	I0603 13:54:49.223383 1143450 start.go:240] waiting for startup goroutines ...
	I0603 13:54:49.223393 1143450 start.go:245] waiting for cluster config update ...
	I0603 13:54:49.223408 1143450 start.go:254] writing updated cluster config ...
	I0603 13:54:49.223704 1143450 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:49.277924 1143450 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:49.280442 1143450 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-030870" cluster and "default" namespace by default
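	Once the "Done!" line is printed, the default-k8s-diff-port-030870 context is active in the updated kubeconfig. A quick sanity check against that context could look like the following; this is an illustrative follow-up, not something the test itself runs.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List nodes and kube-system pods through the context minikube just configured.
	checks := [][]string{
		{"--context", "default-k8s-diff-port-030870", "get", "nodes", "-o", "wide"},
		{"--context", "default-k8s-diff-port-030870", "-n", "kube-system", "get", "pods"},
	}
	for _, args := range checks {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl error:", err)
		}
		fmt.Print(string(out))
	}
}
```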
	I0603 13:54:48.932024 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:54:48.932110 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:54:48.932168 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:54:48.932235 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:54:48.932305 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:54:48.932481 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:54:48.932639 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:54:48.933272 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:54:48.933771 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:54:48.934251 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:54:48.934654 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:54:48.934712 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:54:48.934762 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:54:49.063897 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:54:49.266680 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:54:49.364943 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:54:49.628905 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:54:49.645861 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:54:49.645991 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:54:49.646049 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:54:49.795196 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:54:47.490336 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.989543 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.798407 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:54:49.798564 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:54:49.800163 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:54:49.802226 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:54:49.803809 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:54:49.806590 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:54:52.490088 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:54.990092 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:57.488119 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:59.489775 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:01.490194 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:03.989075 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:05.990054 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:08.489226 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:10.989028 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:13.489118 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:15.489176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:17.989008 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:20.489091 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:22.989284 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:24.990020 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.489326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.983679 1142862 pod_ready.go:81] duration metric: took 4m0.001142992s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	E0603 13:55:27.983708 1142862 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 13:55:27.983731 1142862 pod_ready.go:38] duration metric: took 4m12.038904247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:55:27.983760 1142862 kubeadm.go:591] duration metric: took 4m21.273943202s to restartPrimaryControlPlane
	W0603 13:55:27.983831 1142862 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:55:27.983865 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:55:29.807867 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:55:29.808474 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:29.808754 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:34.809455 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:34.809722 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:44.810305 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:44.810491 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:59.870853 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.886953189s)
	I0603 13:55:59.870958 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:55:59.889658 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:55:59.901529 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:55:59.914241 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:55:59.914266 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:55:59.914312 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:55:59.924884 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:55:59.924950 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:55:59.935494 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:55:59.946222 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:55:59.946321 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:55:59.956749 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.967027 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:55:59.967110 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.979124 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:55:59.989689 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:55:59.989751 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
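	The block above is the stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (here the files do not exist at all after the reset, so every grep exits with status 2 and the rm -f is a no-op). A hedged Go sketch of that check-and-remove pattern; the endpoint and file list come from the log, the helper itself is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfig drops a kubeconfig that does not mention the expected
// control-plane endpoint; a missing file simply makes the grep fail, after
// which the rm -f is harmless, matching the log above.
func cleanStaleConfig(path string) {
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
		fmt.Printf("%s does not reference %s (%v), removing\n", path, endpoint, err)
		if rmErr := exec.Command("sudo", "rm", "-f", path).Run(); rmErr != nil {
			fmt.Println("rm failed:", rmErr)
		}
		return
	}
	fmt.Println(path, "already points at", endpoint)
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		cleanStaleConfig("/etc/kubernetes/" + f)
	}
}
```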
	I0603 13:56:00.000616 1142862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:00.230878 1142862 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:04.811725 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:04.811929 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:08.995375 1142862 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 13:56:08.995463 1142862 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:08.995588 1142862 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:08.995724 1142862 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:08.995874 1142862 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:08.995970 1142862 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:08.997810 1142862 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:08.997914 1142862 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:08.998045 1142862 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:08.998154 1142862 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:08.998321 1142862 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:08.998423 1142862 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:08.998506 1142862 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:08.998578 1142862 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:08.998665 1142862 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:08.998764 1142862 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:08.998860 1142862 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:08.998919 1142862 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:08.999011 1142862 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:08.999111 1142862 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:08.999202 1142862 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 13:56:08.999275 1142862 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:08.999354 1142862 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:08.999423 1142862 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:08.999538 1142862 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:08.999692 1142862 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:09.001133 1142862 out.go:204]   - Booting up control plane ...
	I0603 13:56:09.001218 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:09.001293 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:09.001354 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:09.001499 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:09.001584 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:09.001637 1142862 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:09.001768 1142862 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 13:56:09.001881 1142862 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 13:56:09.001941 1142862 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.923053ms
	I0603 13:56:09.002010 1142862 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 13:56:09.002090 1142862 kubeadm.go:309] [api-check] The API server is healthy after 5.502208975s
	I0603 13:56:09.002224 1142862 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 13:56:09.002363 1142862 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 13:56:09.002457 1142862 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 13:56:09.002647 1142862 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-817450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 13:56:09.002713 1142862 kubeadm.go:309] [bootstrap-token] Using token: a7hbk8.xb8is7k6ewa3l3ya
	I0603 13:56:09.004666 1142862 out.go:204]   - Configuring RBAC rules ...
	I0603 13:56:09.004792 1142862 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 13:56:09.004883 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 13:56:09.005026 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 13:56:09.005234 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 13:56:09.005389 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 13:56:09.005531 1142862 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 13:56:09.005651 1142862 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 13:56:09.005709 1142862 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 13:56:09.005779 1142862 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 13:56:09.005787 1142862 kubeadm.go:309] 
	I0603 13:56:09.005869 1142862 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 13:56:09.005885 1142862 kubeadm.go:309] 
	I0603 13:56:09.006014 1142862 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 13:56:09.006034 1142862 kubeadm.go:309] 
	I0603 13:56:09.006076 1142862 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 13:56:09.006136 1142862 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 13:56:09.006197 1142862 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 13:56:09.006203 1142862 kubeadm.go:309] 
	I0603 13:56:09.006263 1142862 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 13:56:09.006273 1142862 kubeadm.go:309] 
	I0603 13:56:09.006330 1142862 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 13:56:09.006338 1142862 kubeadm.go:309] 
	I0603 13:56:09.006393 1142862 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 13:56:09.006476 1142862 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 13:56:09.006542 1142862 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 13:56:09.006548 1142862 kubeadm.go:309] 
	I0603 13:56:09.006629 1142862 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 13:56:09.006746 1142862 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 13:56:09.006758 1142862 kubeadm.go:309] 
	I0603 13:56:09.006850 1142862 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.006987 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 13:56:09.007028 1142862 kubeadm.go:309] 	--control-plane 
	I0603 13:56:09.007037 1142862 kubeadm.go:309] 
	I0603 13:56:09.007141 1142862 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 13:56:09.007170 1142862 kubeadm.go:309] 
	I0603 13:56:09.007266 1142862 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.007427 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
	I0603 13:56:09.007451 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:56:09.007464 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:56:09.009292 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:56:09.010750 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:56:09.022810 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
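	The scp line above copies a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact bytes are not in the log; the sketch below writes an illustrative bridge conflist of the same general shape (the plugin list and the 10.244.0.0/16 subnet are assumptions, not the file minikube shipped).

```go
package main

import (
	"fmt"
	"os"
)

// An illustrative bridge CNI config; minikube's actual 1-k8s.conflist may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
}
```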
	I0603 13:56:09.052132 1142862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-817450 minikube.k8s.io/updated_at=2024_06_03T13_56_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=no-preload-817450 minikube.k8s.io/primary=true
	I0603 13:56:09.291610 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.296892 1142862 ops.go:34] apiserver oom_adj: -16
	I0603 13:56:09.792736 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.292471 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.792688 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.291782 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.792454 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.292056 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.792150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.292620 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.792024 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.292501 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.791790 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.292128 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.792608 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.292106 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.292276 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.292644 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.792571 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.292064 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.791908 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.292511 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.792137 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.292153 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.791809 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.882178 1142862 kubeadm.go:1107] duration metric: took 12.830108615s to wait for elevateKubeSystemPrivileges
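	The repeated `kubectl get sa default` runs above are the wait for the default ServiceAccount to exist (summarized as elevateKubeSystemPrivileges taking ~12.8s). A stand-alone sketch of that polling loop follows; the roughly half-second interval is read off the timestamps and the timeout is an assumption, not a value from minikube's source.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls the versioned kubectl until `get sa default`
// succeeds or the timeout elapses, roughly every half second as in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	kubectl := "/var/lib/minikube/binaries/v1.30.1/kubectl"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("wait result:", err)
}
```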
	W0603 13:56:21.882223 1142862 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 13:56:21.882236 1142862 kubeadm.go:393] duration metric: took 5m15.237452092s to StartCluster
	I0603 13:56:21.882260 1142862 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.882368 1142862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:56:21.883986 1142862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.884288 1142862 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:56:21.885915 1142862 out.go:177] * Verifying Kubernetes components...
	I0603 13:56:21.884411 1142862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:56:21.884504 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:56:21.887156 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:56:21.887168 1142862 addons.go:69] Setting storage-provisioner=true in profile "no-preload-817450"
	I0603 13:56:21.887199 1142862 addons.go:69] Setting metrics-server=true in profile "no-preload-817450"
	I0603 13:56:21.887230 1142862 addons.go:234] Setting addon storage-provisioner=true in "no-preload-817450"
	W0603 13:56:21.887245 1142862 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:56:21.887261 1142862 addons.go:234] Setting addon metrics-server=true in "no-preload-817450"
	W0603 13:56:21.887276 1142862 addons.go:243] addon metrics-server should already be in state true
	I0603 13:56:21.887295 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887316 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887156 1142862 addons.go:69] Setting default-storageclass=true in profile "no-preload-817450"
	I0603 13:56:21.887366 1142862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-817450"
	I0603 13:56:21.887709 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887711 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887749 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887752 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887779 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887778 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.906019 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I0603 13:56:21.906319 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I0603 13:56:21.906563 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0603 13:56:21.906601 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.906714 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907043 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907126 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907143 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907269 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907288 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907558 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907578 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907752 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.907891 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908248 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.908269 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.908419 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908487 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.909150 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.909175 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.912898 1142862 addons.go:234] Setting addon default-storageclass=true in "no-preload-817450"
	W0603 13:56:21.912926 1142862 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:56:21.912963 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.913361 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.913413 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.928877 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0603 13:56:21.929336 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0603 13:56:21.929541 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930006 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930064 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0603 13:56:21.930161 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930186 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930580 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930723 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.930798 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930812 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930891 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.931037 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.931052 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.931187 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931369 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931394 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.932113 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.932140 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.933613 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.936068 1142862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:56:21.934518 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.937788 1142862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:21.937821 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:56:21.937844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.939174 1142862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:56:21.940435 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:56:21.940458 1142862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:56:21.940559 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.942628 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.943950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944227 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944257 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944449 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944658 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.944734 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944780 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.944919 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944932 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.945154 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.945309 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.945457 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.951140 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0603 13:56:21.951606 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.952125 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.952152 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.952579 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.952808 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.954505 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.954760 1142862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:21.954781 1142862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:56:21.954801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.958298 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.958816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.958851 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.959086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.959325 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.959515 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.959678 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:22.102359 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:56:22.121380 1142862 node_ready.go:35] waiting up to 6m0s for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135572 1142862 node_ready.go:49] node "no-preload-817450" has status "Ready":"True"
	I0603 13:56:22.135599 1142862 node_ready.go:38] duration metric: took 14.156504ms for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135614 1142862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:22.151036 1142862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:22.283805 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:22.288913 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:56:22.288938 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:56:22.297769 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:22.329187 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:56:22.329221 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:56:22.393569 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:22.393594 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:56:22.435605 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:23.470078 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18622743s)
	I0603 13:56:23.470155 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.172344092s)
	I0603 13:56:23.470171 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470192 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470200 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470216 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470515 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.470553 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470567 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470576 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470586 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470589 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470602 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470613 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470625 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470807 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470823 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.471108 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.471138 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.471180 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492187 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.492226 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.492596 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.492618 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492636 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.892903 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.45716212s)
	I0603 13:56:23.892991 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893006 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893418 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893426 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893442 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893459 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893468 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893790 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893811 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893832 1142862 addons.go:475] Verifying addon metrics-server=true in "no-preload-817450"
	I0603 13:56:23.895990 1142862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:56:23.897968 1142862 addons.go:510] duration metric: took 2.013558036s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
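A quick way to verify the three addons enabled above against this profile (a sketch; the kubectl context name "no-preload-817450" comes from the "Done!" line further down in this log, and the StorageClass name is assumed to be minikube's default "standard"):

	# metrics-server is installed as a Deployment in kube-system
	kubectl --context no-preload-817450 -n kube-system get deploy metrics-server
	# storage-provisioner runs as a single pod in kube-system
	kubectl --context no-preload-817450 -n kube-system get pod storage-provisioner
	# default-storageclass should leave exactly one class marked "(default)"
	kubectl --context no-preload-817450 get storageclass
	# only succeeds once metrics-server is actually serving metrics
	kubectl --context no-preload-817450 top nodes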
	I0603 13:56:24.157803 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"False"
	I0603 13:56:24.658730 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.658765 1142862 pod_ready.go:81] duration metric: took 2.507699067s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.658779 1142862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664053 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.664084 1142862 pod_ready.go:81] duration metric: took 5.2962ms for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664096 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668496 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.668521 1142862 pod_ready.go:81] duration metric: took 4.417565ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668533 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673549 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.673568 1142862 pod_ready.go:81] duration metric: took 5.026882ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673577 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678207 1142862 pod_ready.go:92] pod "kube-proxy-t45fn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.678228 1142862 pod_ready.go:81] duration metric: took 4.644345ms for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678239 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056174 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:25.056204 1142862 pod_ready.go:81] duration metric: took 377.957963ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056214 1142862 pod_ready.go:38] duration metric: took 2.920586356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:25.056231 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:56:25.056294 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:56:25.071253 1142862 api_server.go:72] duration metric: took 3.186917827s to wait for apiserver process to appear ...
	I0603 13:56:25.071291 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:56:25.071319 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:56:25.076592 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:56:25.077531 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:56:25.077553 1142862 api_server.go:131] duration metric: took 6.255263ms to wait for apiserver health ...
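The healthz probe logged above can be reproduced by hand; a minimal sketch (the endpoint is taken directly from the log, and -k is assumed because the apiserver certificate is signed by minikube's own CA rather than a system-trusted one):

	curl -sk https://192.168.72.125:8443/healthz
	# expected output on a healthy control plane:
	# ok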
	I0603 13:56:25.077561 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:56:25.258520 1142862 system_pods.go:59] 9 kube-system pods found
	I0603 13:56:25.258552 1142862 system_pods.go:61] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.258557 1142862 system_pods.go:61] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.258560 1142862 system_pods.go:61] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.258565 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.258569 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.258573 1142862 system_pods.go:61] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.258578 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.258585 1142862 system_pods.go:61] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.258591 1142862 system_pods.go:61] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.258603 1142862 system_pods.go:74] duration metric: took 181.034608ms to wait for pod list to return data ...
	I0603 13:56:25.258618 1142862 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:56:25.454775 1142862 default_sa.go:45] found service account: "default"
	I0603 13:56:25.454810 1142862 default_sa.go:55] duration metric: took 196.18004ms for default service account to be created ...
	I0603 13:56:25.454820 1142862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:56:25.658868 1142862 system_pods.go:86] 9 kube-system pods found
	I0603 13:56:25.658908 1142862 system_pods.go:89] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.658919 1142862 system_pods.go:89] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.658926 1142862 system_pods.go:89] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.658932 1142862 system_pods.go:89] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.658938 1142862 system_pods.go:89] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.658944 1142862 system_pods.go:89] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.658950 1142862 system_pods.go:89] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.658959 1142862 system_pods.go:89] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.658970 1142862 system_pods.go:89] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.658983 1142862 system_pods.go:126] duration metric: took 204.156078ms to wait for k8s-apps to be running ...
	I0603 13:56:25.658999 1142862 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:56:25.659058 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:25.674728 1142862 system_svc.go:56] duration metric: took 15.717684ms WaitForService to wait for kubelet
	I0603 13:56:25.674759 1142862 kubeadm.go:576] duration metric: took 3.790431991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:56:25.674777 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:56:25.855640 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:56:25.855671 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:56:25.855684 1142862 node_conditions.go:105] duration metric: took 180.901974ms to run NodePressure ...
	I0603 13:56:25.855696 1142862 start.go:240] waiting for startup goroutines ...
	I0603 13:56:25.855703 1142862 start.go:245] waiting for cluster config update ...
	I0603 13:56:25.855716 1142862 start.go:254] writing updated cluster config ...
	I0603 13:56:25.856020 1142862 ssh_runner.go:195] Run: rm -f paused
	I0603 13:56:25.908747 1142862 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:56:25.911049 1142862 out.go:177] * Done! kubectl is now configured to use "no-preload-817450" cluster and "default" namespace by default
	I0603 13:56:44.813650 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:44.813933 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:44.813964 1143678 kubeadm.go:309] 
	I0603 13:56:44.814039 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:56:44.814075 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:56:44.814115 1143678 kubeadm.go:309] 
	I0603 13:56:44.814197 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:56:44.814246 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:56:44.814369 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:56:44.814378 1143678 kubeadm.go:309] 
	I0603 13:56:44.814496 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:56:44.814540 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:56:44.814573 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:56:44.814580 1143678 kubeadm.go:309] 
	I0603 13:56:44.814685 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:56:44.814785 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:56:44.814798 1143678 kubeadm.go:309] 
	I0603 13:56:44.814896 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:56:44.815001 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:56:44.815106 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:56:44.815208 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:56:44.815220 1143678 kubeadm.go:309] 
	I0603 13:56:44.816032 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:44.816137 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:56:44.816231 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 13:56:44.816405 1143678 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
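The commands kubeadm recommends above have to be run inside the failing node, not on the CI host. Through the minikube CLI that looks roughly like the following (a sketch; the profile name "old-k8s-version-151788" is taken from the CRI-O section at the end of this log, the rest are stock systemd/crictl commands):

	minikube ssh -p old-k8s-version-151788 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-151788 "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	# list any control-plane containers CRI-O managed to start (none were found in this run)
	minikube ssh -p old-k8s-version-151788 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"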
	
	I0603 13:56:44.816480 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:56:45.288649 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:45.305284 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:56:45.316705 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:56:45.316736 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:56:45.316804 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:56:45.327560 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:56:45.327630 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:56:45.337910 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:56:45.349864 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:56:45.349948 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:56:45.361369 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.371797 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:56:45.371866 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.382861 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:56:45.393310 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:56:45.393382 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
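The grep-then-rm sequence above is minikube's stale-config check: any /etc/kubernetes/*.conf that does not point at control-plane.minikube.internal:8443 is removed before kubeadm is re-run. A shell equivalent of the same logic (a sketch, not minikube's actual code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done

In this particular run the files did not exist at all (grep exits with status 2), so each rm is a no-op and kubeadm init starts from a clean /etc/kubernetes.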
	I0603 13:56:45.403822 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:45.476725 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:56:45.476794 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:45.630786 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:45.630956 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:45.631125 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:45.814370 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:45.816372 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:45.816481 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:45.816556 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:45.816710 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:45.816831 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:45.816928 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:45.817003 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:45.817093 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:45.817178 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:45.817328 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:45.817477 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:45.817533 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:45.817607 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:46.025905 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:46.331809 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:46.551488 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:46.636938 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:46.663292 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:46.663400 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:46.663448 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:46.840318 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:46.842399 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:56:46.842530 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:46.851940 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:46.855283 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:46.855443 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:46.857883 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:57:26.860915 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:57:26.861047 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:26.861296 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:31.861724 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:31.862046 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:41.862803 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:41.863057 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:01.862907 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:01.863136 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862069 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:41.862391 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862430 1143678 kubeadm.go:309] 
	I0603 13:58:41.862535 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:58:41.862613 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:58:41.862624 1143678 kubeadm.go:309] 
	I0603 13:58:41.862675 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:58:41.862737 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:58:41.862895 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:58:41.862909 1143678 kubeadm.go:309] 
	I0603 13:58:41.863030 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:58:41.863060 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:58:41.863090 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:58:41.863100 1143678 kubeadm.go:309] 
	I0603 13:58:41.863230 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:58:41.863388 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:58:41.863406 1143678 kubeadm.go:309] 
	I0603 13:58:41.863583 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:58:41.863709 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:58:41.863811 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:58:41.863894 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:58:41.863917 1143678 kubeadm.go:309] 
	I0603 13:58:41.865001 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:58:41.865120 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:58:41.865209 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 13:58:41.865361 1143678 kubeadm.go:393] duration metric: took 8m3.432874561s to StartCluster
	I0603 13:58:41.865460 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:58:41.865537 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:58:41.912780 1143678 cri.go:89] found id: ""
	I0603 13:58:41.912812 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.912826 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:58:41.912832 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:58:41.912901 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:58:41.951372 1143678 cri.go:89] found id: ""
	I0603 13:58:41.951402 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.951411 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:58:41.951418 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:58:41.951490 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:58:41.989070 1143678 cri.go:89] found id: ""
	I0603 13:58:41.989104 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.989115 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:58:41.989123 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:58:41.989191 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:58:42.026208 1143678 cri.go:89] found id: ""
	I0603 13:58:42.026238 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.026246 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:58:42.026252 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:58:42.026312 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:58:42.064899 1143678 cri.go:89] found id: ""
	I0603 13:58:42.064941 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.064950 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:58:42.064971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:58:42.065043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:58:42.098817 1143678 cri.go:89] found id: ""
	I0603 13:58:42.098858 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.098868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:58:42.098876 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:58:42.098939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:58:42.133520 1143678 cri.go:89] found id: ""
	I0603 13:58:42.133558 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.133570 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:58:42.133579 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:58:42.133639 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:58:42.187356 1143678 cri.go:89] found id: ""
	I0603 13:58:42.187387 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.187399 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:58:42.187412 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:58:42.187434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:58:42.249992 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:58:42.250034 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:58:42.272762 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:58:42.272801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:58:42.362004 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:58:42.362030 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:58:42.362046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:58:42.468630 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:58:42.468676 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0603 13:58:42.510945 1143678 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 13:58:42.511002 1143678 out.go:239] * 
	W0603 13:58:42.511094 1143678 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.511119 1143678 out.go:239] * 
	W0603 13:58:42.512307 1143678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:58:42.516199 1143678 out.go:177] 
	W0603 13:58:42.517774 1143678 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.517848 1143678 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 13:58:42.517883 1143678 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 13:58:42.519747 1143678 out.go:177] 
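The suggestion in the log is to retry with the kubelet cgroup driver pinned to systemd. Roughly (a sketch; the profile name and Kubernetes version come from the log above, while the kvm2 driver and cri-o runtime are inferred from the job name and are assumptions, not flags shown in this log):

	minikube start -p old-k8s-version-151788 \
	  --driver=kvm2 \
	  --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd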
	
	
	==> CRI-O <==
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.583437417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423124583409387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5ad731e-65b0-4d1f-bff9-b6582f2716df name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.583967311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76e6ebbb-c45c-4294-867f-23a7f4927267 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.584969055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76e6ebbb-c45c-4294-867f-23a7f4927267 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.585479397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=76e6ebbb-c45c-4294-867f-23a7f4927267 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.625033118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93e361ab-8911-46cf-8d55-4adacd4225e6 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.625139686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93e361ab-8911-46cf-8d55-4adacd4225e6 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.626627957Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6b55ff2-a75e-438d-81fb-99286b2b1426 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.626991941Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423124626972703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6b55ff2-a75e-438d-81fb-99286b2b1426 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.627732456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90e52383-4b56-4110-b1ee-0d6577420a96 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.627805248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90e52383-4b56-4110-b1ee-0d6577420a96 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.627838023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=90e52383-4b56-4110-b1ee-0d6577420a96 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.662351510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f60418aa-4465-4393-af4d-cb8e281b7c91 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.662440009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f60418aa-4465-4393-af4d-cb8e281b7c91 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.663631712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23311657-c5db-4506-8a68-4a96f5fe585b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.663993187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423124663972605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23311657-c5db-4506-8a68-4a96f5fe585b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.664547070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=806f7da8-b761-4145-8169-902bc244b550 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.664611351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=806f7da8-b761-4145-8169-902bc244b550 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.664645675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=806f7da8-b761-4145-8169-902bc244b550 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.705659258Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4882afbb-3756-431a-af06-30d4dfc90216 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.705776847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4882afbb-3756-431a-af06-30d4dfc90216 name=/runtime.v1.RuntimeService/Version
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.706991933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=351edaa2-f233-40bc-b455-453282ef6c54 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.707597387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423124707568242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=351edaa2-f233-40bc-b455-453282ef6c54 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.708366822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17f5ba5d-1e57-498c-8ec4-bd895e6ce4bf name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.708474464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17f5ba5d-1e57-498c-8ec4-bd895e6ce4bf name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 13:58:44 old-k8s-version-151788 crio[659]: time="2024-06-03 13:58:44.708530174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=17f5ba5d-1e57-498c-8ec4-bd895e6ce4bf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun 3 13:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055954] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042975] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.825342] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.576562] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.695734] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.047871] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.063174] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.087641] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.197728] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.185593] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.323645] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +6.685681] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.076136] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.031593] systemd-fstab-generator[973]: Ignoring "noauto" option for root device
	[ +10.661843] kauditd_printk_skb: 46 callbacks suppressed
	[Jun 3 13:54] systemd-fstab-generator[5027]: Ignoring "noauto" option for root device
	[Jun 3 13:56] systemd-fstab-generator[5309]: Ignoring "noauto" option for root device
	[  +0.079564] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:58:44 up 8 min,  0 users,  load average: 0.25, 0.19, 0.12
	Linux old-k8s-version-151788 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000ad53a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000db8c00, 0x24, 0x1000000000060, 0x7f58a57c09e8, 0x118, ...)
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]: net/http.(*Transport).dial(0xc000198b40, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000db8c00, 0x24, 0x0, 0x0, 0x0, ...)
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]: net/http.(*Transport).dialConn(0xc000198b40, 0x4f7fe00, 0xc000052030, 0x0, 0xc000c12180, 0x5, 0xc000db8c00, 0x24, 0x0, 0xc0006e0120, ...)
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]: net/http.(*Transport).dialConnFor(0xc000198b40, 0xc000b1b6b0)
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]: created by net/http.(*Transport).queueForDial
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]: goroutine 156 [select]:
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c0eae0, 0xc0001b0200, 0xc0004d6780, 0xc0004d66c0)
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]: created by net.(*netFD).connect
	Jun 03 13:58:41 old-k8s-version-151788 kubelet[5487]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Jun 03 13:58:41 old-k8s-version-151788 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 03 13:58:41 old-k8s-version-151788 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 03 13:58:42 old-k8s-version-151788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jun 03 13:58:42 old-k8s-version-151788 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 03 13:58:42 old-k8s-version-151788 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 03 13:58:42 old-k8s-version-151788 kubelet[5532]: I0603 13:58:42.254530    5532 server.go:416] Version: v1.20.0
	Jun 03 13:58:42 old-k8s-version-151788 kubelet[5532]: I0603 13:58:42.254763    5532 server.go:837] Client rotation is on, will bootstrap in background
	Jun 03 13:58:42 old-k8s-version-151788 kubelet[5532]: I0603 13:58:42.256760    5532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 03 13:58:42 old-k8s-version-151788 kubelet[5532]: W0603 13:58:42.258159    5532 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 03 13:58:42 old-k8s-version-151788 kubelet[5532]: I0603 13:58:42.258298    5532 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 2 (230.159064ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-151788" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (744.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.58s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0603 13:54:30.123493 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223260 -n embed-certs-223260
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-03 14:03:29.334758732 +0000 UTC m=+5961.539317236
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223260 -n embed-certs-223260
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-223260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-223260 logs -n 25: (2.249681267s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-021279 sudo cat                              | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo find                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo crio                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-021279                                       | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-069000 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | disable-driver-mounts-069000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:43 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-817450             | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC | 03 Jun 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223260            | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-030870  | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-817450                  | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-151788        | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC | 03 Jun 24 13:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223260                 | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC | 03 Jun 24 13:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-030870       | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:54 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-151788             | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:46:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:46:22.347386 1143678 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:46:22.347655 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347666 1143678 out.go:304] Setting ErrFile to fd 2...
	I0603 13:46:22.347672 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347855 1143678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:46:22.348458 1143678 out.go:298] Setting JSON to false
	I0603 13:46:22.349502 1143678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16129,"bootTime":1717406253,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:46:22.349571 1143678 start.go:139] virtualization: kvm guest
	I0603 13:46:22.351720 1143678 out.go:177] * [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:46:22.353180 1143678 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:46:22.353235 1143678 notify.go:220] Checking for updates...
	I0603 13:46:22.354400 1143678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:46:22.355680 1143678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:46:22.356796 1143678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:46:22.357952 1143678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:46:22.359052 1143678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:46:22.360807 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:46:22.361230 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.361306 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.376241 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0603 13:46:22.376679 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.377267 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.377292 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.377663 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.377897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.379705 1143678 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 13:46:22.380895 1143678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:46:22.381188 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.381222 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.396163 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0603 13:46:22.396669 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.397158 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.397180 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.397509 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.397693 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.433731 1143678 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:46:22.434876 1143678 start.go:297] selected driver: kvm2
	I0603 13:46:22.434897 1143678 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.435028 1143678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:46:22.435716 1143678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.435807 1143678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:46:22.451200 1143678 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:46:22.451663 1143678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:46:22.451755 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:46:22.451773 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:46:22.451832 1143678 start.go:340] cluster config:
	{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.451961 1143678 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.454327 1143678 out.go:177] * Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	I0603 13:46:22.057705 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:22.455453 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:46:22.455492 1143678 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 13:46:22.455501 1143678 cache.go:56] Caching tarball of preloaded images
	I0603 13:46:22.455591 1143678 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:46:22.455604 1143678 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 13:46:22.455685 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:46:22.455860 1143678 start.go:360] acquireMachinesLock for old-k8s-version-151788: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:46:28.137725 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:31.209684 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:37.289692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:40.361614 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:46.441692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:49.513686 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:55.593727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:58.665749 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:04.745752 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:07.817726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:13.897702 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:16.969727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:23.049716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:26.121758 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:32.201765 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:35.273759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:41.353716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:44.425767 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:50.505743 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:53.577777 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:59.657729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:02.729769 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:08.809709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:11.881708 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:17.961759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:21.033726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:27.113698 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:30.185691 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:36.265722 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:39.337764 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:45.417711 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:48.489729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:54.569746 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:57.641701 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:03.721772 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:06.793709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:12.873710 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:15.945728 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:22.025678 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:25.097675 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:28.102218 1143252 start.go:364] duration metric: took 3m44.709006863s to acquireMachinesLock for "embed-certs-223260"
	I0603 13:49:28.102293 1143252 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:28.102302 1143252 fix.go:54] fixHost starting: 
	I0603 13:49:28.102635 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:28.102666 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:28.118384 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0603 13:49:28.119014 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:28.119601 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:49:28.119630 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:28.119930 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:28.120116 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:28.120302 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:49:28.122003 1143252 fix.go:112] recreateIfNeeded on embed-certs-223260: state=Stopped err=<nil>
	I0603 13:49:28.122030 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	W0603 13:49:28.122167 1143252 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:28.123963 1143252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223260" ...
	I0603 13:49:28.125564 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Start
	I0603 13:49:28.125750 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring networks are active...
	I0603 13:49:28.126598 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network default is active
	I0603 13:49:28.126965 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network mk-embed-certs-223260 is active
	I0603 13:49:28.127319 1143252 main.go:141] libmachine: (embed-certs-223260) Getting domain xml...
	I0603 13:49:28.128017 1143252 main.go:141] libmachine: (embed-certs-223260) Creating domain...
	I0603 13:49:28.099474 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:28.099536 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.099883 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:49:28.099915 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.100115 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:49:28.102052 1142862 machine.go:97] duration metric: took 4m37.409499751s to provisionDockerMachine
	I0603 13:49:28.102123 1142862 fix.go:56] duration metric: took 4m37.432963538s for fixHost
	I0603 13:49:28.102135 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 4m37.432994587s
	W0603 13:49:28.102158 1142862 start.go:713] error starting host: provision: host is not running
	W0603 13:49:28.102317 1142862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 13:49:28.102332 1142862 start.go:728] Will try again in 5 seconds ...
	I0603 13:49:29.332986 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting to get IP...
	I0603 13:49:29.333963 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.334430 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.334475 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.334403 1144333 retry.go:31] will retry after 203.681987ms: waiting for machine to come up
	I0603 13:49:29.539995 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.540496 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.540564 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.540457 1144333 retry.go:31] will retry after 368.548292ms: waiting for machine to come up
	I0603 13:49:29.911212 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.911632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.911665 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.911566 1144333 retry.go:31] will retry after 402.690969ms: waiting for machine to come up
	I0603 13:49:30.316480 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.316889 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.316920 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.316852 1144333 retry.go:31] will retry after 500.397867ms: waiting for machine to come up
	I0603 13:49:30.818653 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.819082 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.819107 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.819026 1144333 retry.go:31] will retry after 663.669804ms: waiting for machine to come up
	I0603 13:49:31.483776 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:31.484117 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:31.484144 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:31.484079 1144333 retry.go:31] will retry after 938.433137ms: waiting for machine to come up
	I0603 13:49:32.424128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:32.424609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:32.424640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:32.424548 1144333 retry.go:31] will retry after 919.793328ms: waiting for machine to come up
	I0603 13:49:33.103895 1142862 start.go:360] acquireMachinesLock for no-preload-817450: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:49:33.346091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:33.346549 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:33.346574 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:33.346511 1144333 retry.go:31] will retry after 1.115349726s: waiting for machine to come up
	I0603 13:49:34.463875 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:34.464588 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:34.464616 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:34.464529 1144333 retry.go:31] will retry after 1.153940362s: waiting for machine to come up
	I0603 13:49:35.619844 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:35.620243 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:35.620275 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:35.620176 1144333 retry.go:31] will retry after 1.514504154s: waiting for machine to come up
	I0603 13:49:37.135961 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:37.136409 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:37.136431 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:37.136382 1144333 retry.go:31] will retry after 2.757306897s: waiting for machine to come up
	I0603 13:49:39.895589 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:39.895942 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:39.895970 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:39.895881 1144333 retry.go:31] will retry after 3.019503072s: waiting for machine to come up
	I0603 13:49:42.919177 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:42.919640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:42.919670 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:42.919588 1144333 retry.go:31] will retry after 3.150730989s: waiting for machine to come up
	I0603 13:49:47.494462 1143450 start.go:364] duration metric: took 3m37.207410663s to acquireMachinesLock for "default-k8s-diff-port-030870"
	I0603 13:49:47.494544 1143450 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:47.494557 1143450 fix.go:54] fixHost starting: 
	I0603 13:49:47.494876 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:47.494918 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:47.511570 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44939
	I0603 13:49:47.512072 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:47.512568 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:49:47.512593 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:47.512923 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:47.513117 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:49:47.513276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:49:47.514783 1143450 fix.go:112] recreateIfNeeded on default-k8s-diff-port-030870: state=Stopped err=<nil>
	I0603 13:49:47.514817 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	W0603 13:49:47.514999 1143450 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:47.517441 1143450 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-030870" ...
	I0603 13:49:46.071609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072094 1143252 main.go:141] libmachine: (embed-certs-223260) Found IP for machine: 192.168.83.246
	I0603 13:49:46.072117 1143252 main.go:141] libmachine: (embed-certs-223260) Reserving static IP address...
	I0603 13:49:46.072132 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has current primary IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072552 1143252 main.go:141] libmachine: (embed-certs-223260) Reserved static IP address: 192.168.83.246
	I0603 13:49:46.072585 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.072593 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting for SSH to be available...
	I0603 13:49:46.072632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | skip adding static IP to network mk-embed-certs-223260 - found existing host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"}
	I0603 13:49:46.072655 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Getting to WaitForSSH function...
	I0603 13:49:46.074738 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075059 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.075091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075189 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH client type: external
	I0603 13:49:46.075213 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa (-rw-------)
	I0603 13:49:46.075249 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:49:46.075271 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | About to run SSH command:
	I0603 13:49:46.075283 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | exit 0
	I0603 13:49:46.197971 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | SSH cmd err, output: <nil>: 
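
WaitForSSH above probes the guest by running `exit 0` through an external ssh client with the options shown in the log. A minimal sketch of that probe loop using os/exec, assuming a fixed polling interval (the interval and error wording are assumptions; the key path and address are the ones from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `exit 0` over ssh until it succeeds or the deadline passes,
// mirroring the external-client probe in the log above.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@"+addr,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is up
		} else if time.Now().After(deadline) {
			return fmt.Errorf("ssh not ready after %s: %v", timeout, err)
		}
		time.Sleep(3 * time.Second) // assumed polling interval
	}
}

func main() {
	err := waitForSSH("192.168.83.246",
		"/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa",
		2*time.Minute)
	fmt.Println("wait result:", err)
}
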
	I0603 13:49:46.198498 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetConfigRaw
	I0603 13:49:46.199179 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.201821 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.202277 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202533 1143252 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/config.json ...
	I0603 13:49:46.202727 1143252 machine.go:94] provisionDockerMachine start ...
	I0603 13:49:46.202745 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:46.202964 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.205259 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205636 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.205663 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205773 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.205954 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206100 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206318 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.206538 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.206819 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.206837 1143252 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:49:46.310241 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:49:46.310277 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310583 1143252 buildroot.go:166] provisioning hostname "embed-certs-223260"
	I0603 13:49:46.310616 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310836 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.313692 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314078 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.314116 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314222 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.314446 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314631 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314800 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.314969 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.315166 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.315183 1143252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223260 && echo "embed-certs-223260" | sudo tee /etc/hostname
	I0603 13:49:46.428560 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223260
	
	I0603 13:49:46.428600 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.431381 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.431757 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.431784 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.432021 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.432283 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432477 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432609 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.432785 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.432960 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.432976 1143252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223260/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:49:46.542400 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:46.542446 1143252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:49:46.542536 1143252 buildroot.go:174] setting up certificates
	I0603 13:49:46.542557 1143252 provision.go:84] configureAuth start
	I0603 13:49:46.542576 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.542913 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.545940 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546339 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.546368 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.548715 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549097 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.549127 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549294 1143252 provision.go:143] copyHostCerts
	I0603 13:49:46.549382 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:49:46.549397 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:49:46.549486 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:49:46.549578 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:49:46.549587 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:49:46.549613 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:49:46.549664 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:49:46.549671 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:49:46.549690 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:49:46.549740 1143252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223260 san=[127.0.0.1 192.168.83.246 embed-certs-223260 localhost minikube]
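
The server cert above is generated with SANs covering 127.0.0.1, the node IP, the hostname, localhost and minikube. A hedged sketch of building a certificate with that SAN set via crypto/x509 (self-signed here for brevity; the real flow signs it with the ca.pem/ca-key.pem listed in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: same SAN set as the log; minikube signs with its CA
	// instead of self-signing.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-223260"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-223260", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.246")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
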
	I0603 13:49:46.807050 1143252 provision.go:177] copyRemoteCerts
	I0603 13:49:46.807111 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:49:46.807140 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.809916 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810303 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.810347 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810513 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.810758 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.810929 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.811168 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:46.892182 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:49:46.916657 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0603 13:49:46.941896 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:49:46.967292 1143252 provision.go:87] duration metric: took 424.714334ms to configureAuth
	I0603 13:49:46.967331 1143252 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:49:46.967539 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:49:46.967626 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.970350 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970668 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.970703 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970870 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.971115 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971314 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971454 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.971625 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.971809 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.971831 1143252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:49:47.264894 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:49:47.264922 1143252 machine.go:97] duration metric: took 1.062182146s to provisionDockerMachine
	I0603 13:49:47.264935 1143252 start.go:293] postStartSetup for "embed-certs-223260" (driver="kvm2")
	I0603 13:49:47.264946 1143252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:49:47.264963 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.265368 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:49:47.265398 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.268412 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268765 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.268796 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.269223 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.269455 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.269625 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.348583 1143252 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:49:47.352828 1143252 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:49:47.352867 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:49:47.352949 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:49:47.353046 1143252 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:49:47.353164 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:49:47.363222 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:47.388132 1143252 start.go:296] duration metric: took 123.177471ms for postStartSetup
	I0603 13:49:47.388202 1143252 fix.go:56] duration metric: took 19.285899119s for fixHost
	I0603 13:49:47.388233 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.390960 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391414 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.391477 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391681 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.391937 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392127 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392266 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.392436 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:47.392670 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:47.392687 1143252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:49:47.494294 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422587.469729448
	
	I0603 13:49:47.494320 1143252 fix.go:216] guest clock: 1717422587.469729448
	I0603 13:49:47.494328 1143252 fix.go:229] Guest: 2024-06-03 13:49:47.469729448 +0000 UTC Remote: 2024-06-03 13:49:47.388208749 +0000 UTC m=+244.138441135 (delta=81.520699ms)
	I0603 13:49:47.494354 1143252 fix.go:200] guest clock delta is within tolerance: 81.520699ms
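
The fixHost step compares the guest's `date +%s.%N` output against the host-side timestamp and only resyncs when the delta exceeds a tolerance. A minimal sketch of parsing that epoch string and checking the delta, using the exact values from the log (the 2s tolerance constant is an assumption for illustration):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1717422587.469729448" (seconds.nanoseconds from
// `date +%s.%N`) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	guest, err := parseGuestClock("1717422587.469729448") // guest value from the log
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, time.June, 3, 13, 49, 47, 388208749, time.UTC) // host-side time from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance) // prints 81.520699ms true
}
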
	I0603 13:49:47.494361 1143252 start.go:83] releasing machines lock for "embed-certs-223260", held for 19.392103897s
	I0603 13:49:47.494394 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.494686 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:47.497515 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.497930 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.497976 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.498110 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498672 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498859 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498934 1143252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:49:47.498988 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.499062 1143252 ssh_runner.go:195] Run: cat /version.json
	I0603 13:49:47.499082 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.501788 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502075 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502131 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502156 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502291 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502390 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502427 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502550 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502647 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502738 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502806 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502942 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502955 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.503078 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.612706 1143252 ssh_runner.go:195] Run: systemctl --version
	I0603 13:49:47.618922 1143252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:49:47.764749 1143252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:49:47.770936 1143252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:49:47.771023 1143252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:49:47.788401 1143252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:49:47.788427 1143252 start.go:494] detecting cgroup driver to use...
	I0603 13:49:47.788486 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:49:47.805000 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:49:47.822258 1143252 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:49:47.822315 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:49:47.837826 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:49:47.853818 1143252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:49:47.978204 1143252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:49:48.106302 1143252 docker.go:233] disabling docker service ...
	I0603 13:49:48.106366 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:49:48.120974 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:49:48.134911 1143252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:49:48.278103 1143252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:49:48.398238 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:49:48.413207 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:49:48.432211 1143252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:49:48.432281 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.443668 1143252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:49:48.443746 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.454990 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.467119 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.479875 1143252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:49:48.496767 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.508872 1143252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.530972 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
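
The sed calls above rewrite whole `key = value` lines in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). A sketch of that edit pattern in Go, assuming a local string instead of the remote file (the real steps run sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

// setConfValue mirrors the sed edits in the log: replace the whole
// `key = ...` line in a crio.conf-style drop-in with a new quoted value.
func setConfValue(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, key+` = "`+value+`"`)
}

func main() {
	conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
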
	I0603 13:49:48.542631 1143252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:49:48.552775 1143252 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:49:48.552836 1143252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
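
The sequence above is a verify-then-load fallback: if the bridge netfilter sysctl cannot be read, the br_netfilter module is loaded. A minimal sketch of that check, assuming local execution with sufficient privileges (the real runner does this over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the pattern in the log: if the bridge-nf
// sysctl is not visible yet, load the br_netfilter module.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl already present, nothing to do
	}
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("warning:", err)
	}
}
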
	I0603 13:49:48.566528 1143252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:49:48.582917 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:48.716014 1143252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:49:48.860157 1143252 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:49:48.860283 1143252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:49:48.865046 1143252 start.go:562] Will wait 60s for crictl version
	I0603 13:49:48.865121 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:49:48.869520 1143252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:49:48.909721 1143252 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:49:48.909819 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.939080 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.970595 1143252 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:49:47.518807 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Start
	I0603 13:49:47.518981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring networks are active...
	I0603 13:49:47.519623 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network default is active
	I0603 13:49:47.519926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network mk-default-k8s-diff-port-030870 is active
	I0603 13:49:47.520408 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Getting domain xml...
	I0603 13:49:47.521014 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Creating domain...
	I0603 13:49:48.798483 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting to get IP...
	I0603 13:49:48.799695 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800174 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800305 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:48.800165 1144471 retry.go:31] will retry after 204.161843ms: waiting for machine to come up
	I0603 13:49:49.005669 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006143 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006180 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.006091 1144471 retry.go:31] will retry after 382.751679ms: waiting for machine to come up
	I0603 13:49:49.391162 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391717 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391750 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.391670 1144471 retry.go:31] will retry after 314.248576ms: waiting for machine to come up
	I0603 13:49:49.707349 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707957 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707990 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.707856 1144471 retry.go:31] will retry after 446.461931ms: waiting for machine to come up
	I0603 13:49:50.155616 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156238 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156274 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.156174 1144471 retry.go:31] will retry after 712.186964ms: waiting for machine to come up
	I0603 13:49:48.971971 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:48.975079 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975439 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:48.975471 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975721 1143252 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0603 13:49:48.980114 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:48.993380 1143252 kubeadm.go:877] updating cluster {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:49:48.993543 1143252 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:49:48.993636 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:49.032289 1143252 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:49:49.032364 1143252 ssh_runner.go:195] Run: which lz4
	I0603 13:49:49.036707 1143252 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 13:49:49.040973 1143252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:49:49.041000 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:49:50.554295 1143252 crio.go:462] duration metric: took 1.517623353s to copy over tarball
	I0603 13:49:50.554387 1143252 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:49:52.823733 1143252 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269303423s)
	I0603 13:49:52.823785 1143252 crio.go:469] duration metric: took 2.269454274s to extract the tarball
	I0603 13:49:52.823799 1143252 ssh_runner.go:146] rm: /preloaded.tar.lz4
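
Since no preloaded images are found, the runner copies the ~395 MB lz4 tarball in, unpacks it into /var with extended attributes preserved, and removes it. A sketch of that extract-and-time step via os/exec, using the exact flags from the log (illustrative; the real code runs these over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Unpack the preload tarball into /var, preserving security.capability
	// xattrs, then delete it - the same commands shown in the log.
	start := time.Now()
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
	_ = exec.Command("rm", "-f", "/preloaded.tar.lz4").Run()
}
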
	I0603 13:49:52.862060 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:52.906571 1143252 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:49:52.906602 1143252 cache_images.go:84] Images are preloaded, skipping loading
	I0603 13:49:52.906618 1143252 kubeadm.go:928] updating node { 192.168.83.246 8443 v1.30.1 crio true true} ...
	I0603 13:49:52.906774 1143252 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:49:52.906866 1143252 ssh_runner.go:195] Run: crio config
	I0603 13:49:52.954082 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:49:52.954111 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:49:52.954129 1143252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:49:52.954159 1143252 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.246 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223260 NodeName:embed-certs-223260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:49:52.954355 1143252 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223260"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:49:52.954446 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:49:52.964488 1143252 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:49:52.964582 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:49:52.974118 1143252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 13:49:52.990701 1143252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:49:53.007539 1143252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 13:49:53.024943 1143252 ssh_runner.go:195] Run: grep 192.168.83.246	control-plane.minikube.internal$ /etc/hosts
	I0603 13:49:53.029097 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
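
The one-liner above drops any stale line ending in a tab plus "control-plane.minikube.internal" and appends a fresh IP mapping. The same idea in plain Go, as a sketch that prints the rewritten file rather than copying it back with sudo as the log does:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry removes any line ending in "\t<name>" and appends "<ip>\t<name>",
// mirroring the grep -v / echo pipeline in the log.
func setHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(setHostsEntry(strings.TrimRight(string(data), "\n"),
		"192.168.83.246", "control-plane.minikube.internal"))
}
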
	I0603 13:49:53.041234 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:53.178449 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:49:53.195718 1143252 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260 for IP: 192.168.83.246
	I0603 13:49:53.195750 1143252 certs.go:194] generating shared ca certs ...
	I0603 13:49:53.195769 1143252 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:49:53.195954 1143252 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:49:53.196021 1143252 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:49:53.196035 1143252 certs.go:256] generating profile certs ...
	I0603 13:49:53.196256 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/client.key
	I0603 13:49:53.196341 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key.90d43877
	I0603 13:49:53.196437 1143252 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key
	I0603 13:49:53.196605 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:49:53.196663 1143252 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:49:53.196678 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:49:53.196708 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:49:53.196756 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:49:53.196787 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:49:53.196838 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:53.197895 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:49:53.231612 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:49:53.263516 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:49:50.870317 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870816 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870841 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.870781 1144471 retry.go:31] will retry after 855.15183ms: waiting for machine to come up
	I0603 13:49:51.727393 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727960 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:51.727869 1144471 retry.go:31] will retry after 997.293541ms: waiting for machine to come up
	I0603 13:49:52.726578 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727036 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727073 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:52.726953 1144471 retry.go:31] will retry after 1.4233414s: waiting for machine to come up
	I0603 13:49:54.151594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152072 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152099 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:54.152021 1144471 retry.go:31] will retry after 1.348888248s: waiting for machine to come up
	I0603 13:49:53.303724 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:49:53.334700 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 13:49:53.371594 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:49:53.396381 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:49:53.420985 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:49:53.445334 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:49:53.469632 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:49:53.495720 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:49:53.522416 1143252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:49:53.541593 1143252 ssh_runner.go:195] Run: openssl version
	I0603 13:49:53.547653 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:49:53.558802 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563511 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563579 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.569691 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:49:53.582814 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:49:53.595684 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600613 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.607008 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:49:53.619919 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:49:53.632663 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637604 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.643844 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:49:53.655934 1143252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:49:53.660801 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:49:53.667391 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:49:53.674382 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:49:53.681121 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:49:53.687496 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:49:53.693623 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
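
The openssl x509 ... -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours before the existing cluster is restarted. A minimal Go sketch of the same check using only the standard library follows; the certificate path in main is an assumed example, not taken from the run.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires within d,
// i.e. the same question `openssl x509 -checkend` answers.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the run checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
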
	I0603 13:49:53.699764 1143252 kubeadm.go:391] StartCluster: {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:49:53.699871 1143252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:49:53.699928 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.736588 1143252 cri.go:89] found id: ""
	I0603 13:49:53.736662 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:49:53.750620 1143252 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:49:53.750644 1143252 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:49:53.750652 1143252 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:49:53.750716 1143252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:49:53.765026 1143252 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:49:53.766297 1143252 kubeconfig.go:125] found "embed-certs-223260" server: "https://192.168.83.246:8443"
	I0603 13:49:53.768662 1143252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:49:53.779583 1143252 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.246
	I0603 13:49:53.779625 1143252 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:49:53.779639 1143252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:49:53.779695 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.820312 1143252 cri.go:89] found id: ""
	I0603 13:49:53.820398 1143252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:49:53.838446 1143252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:49:53.849623 1143252 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:49:53.849643 1143252 kubeadm.go:156] found existing configuration files:
	
	I0603 13:49:53.849700 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:49:53.859379 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:49:53.859451 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:49:53.869939 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:49:53.880455 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:49:53.880527 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:49:53.890918 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.900841 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:49:53.900894 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.910968 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:49:53.921064 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:49:53.921121 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:49:53.931550 1143252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:49:53.942309 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.078959 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.842079 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.043420 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.111164 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.220384 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:49:55.220475 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:55.721612 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.221513 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.257801 1143252 api_server.go:72] duration metric: took 1.037411844s to wait for apiserver process to appear ...
	I0603 13:49:56.257845 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:49:56.257874 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:55.502734 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503282 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503313 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:55.503226 1144471 retry.go:31] will retry after 1.733012887s: waiting for machine to come up
	I0603 13:49:57.238544 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.238975 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.239006 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:57.238917 1144471 retry.go:31] will retry after 2.565512625s: waiting for machine to come up
	I0603 13:49:59.806662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807077 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807105 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:59.807024 1144471 retry.go:31] will retry after 2.759375951s: waiting for machine to come up
	I0603 13:49:59.684015 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.684058 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.684078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.757751 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.757791 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.758846 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.779923 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:49:59.779974 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.258098 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.265061 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.265089 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.758643 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.764364 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.764400 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.257950 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.262846 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.262875 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.758078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.763269 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.763301 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.258641 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.263628 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.263658 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.758205 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.765436 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.765470 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:03.258663 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:03.263141 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:50:03.269787 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:03.269817 1143252 api_server.go:131] duration metric: took 7.011964721s to wait for apiserver health ...
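
The polling above repeatedly hits https://192.168.83.246:8443/healthz and tolerates the 403 (anonymous user) and 500 (post-start hooks still failing) responses until the endpoint finally returns 200. A minimal sketch of such a loop follows; it is not the api_server.go implementation, and it skips TLS verification purely to stay short, whereas minikube authenticates with the cluster's client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 OK or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 while RBAC bootstraps, 500 while post-start hooks finish.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.246:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
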
	I0603 13:50:03.269827 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:50:03.269833 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:03.271812 1143252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:03.273154 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:03.285329 1143252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:50:03.305480 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:03.317546 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:03.317601 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:03.317614 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:03.317627 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:03.317637 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:03.317645 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:50:03.317658 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:03.317667 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:03.317677 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:50:03.317686 1143252 system_pods.go:74] duration metric: took 12.177585ms to wait for pod list to return data ...
	I0603 13:50:03.317695 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:03.321445 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:03.321479 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:03.321493 1143252 node_conditions.go:105] duration metric: took 3.787651ms to run NodePressure ...
	I0603 13:50:03.321512 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:03.598576 1143252 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604196 1143252 kubeadm.go:733] kubelet initialised
	I0603 13:50:03.604219 1143252 kubeadm.go:734] duration metric: took 5.606021ms waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604236 1143252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:03.611441 1143252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.615911 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615936 1143252 pod_ready.go:81] duration metric: took 4.468017ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.615945 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615955 1143252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.620663 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620683 1143252 pod_ready.go:81] duration metric: took 4.71967ms for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.620691 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620697 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.624894 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624917 1143252 pod_ready.go:81] duration metric: took 4.212227ms for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.624925 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624933 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.708636 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708665 1143252 pod_ready.go:81] duration metric: took 83.72445ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.708675 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708681 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.109391 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109454 1143252 pod_ready.go:81] duration metric: took 400.761651ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.109469 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109478 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.509683 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509712 1143252 pod_ready.go:81] duration metric: took 400.226435ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.509723 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509730 1143252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.909629 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909659 1143252 pod_ready.go:81] duration metric: took 399.917901ms for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.909669 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909679 1143252 pod_ready.go:38] duration metric: took 1.30543039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
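
The pod_ready.go lines above poll each system-critical pod until its Ready condition is true, skipping pods whose node still reports Ready:"False". Below is a minimal client-go sketch of checking one pod's Ready condition; the kubeconfig path is a placeholder, the pod name is taken from the log, and this is not the helper the test itself uses.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test uses its own profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one pod until Ready or timeout, as pod_ready.go does per pod.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-qdjrv", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
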
	I0603 13:50:04.909697 1143252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:04.921682 1143252 ops.go:34] apiserver oom_adj: -16
	I0603 13:50:04.921708 1143252 kubeadm.go:591] duration metric: took 11.171050234s to restartPrimaryControlPlane
	I0603 13:50:04.921717 1143252 kubeadm.go:393] duration metric: took 11.221962831s to StartCluster
	I0603 13:50:04.921737 1143252 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.921807 1143252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:04.923342 1143252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.923628 1143252 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:04.927063 1143252 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:04.923693 1143252 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:04.923865 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:04.928850 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:04.928873 1143252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223260"
	I0603 13:50:04.928872 1143252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223260"
	I0603 13:50:04.928889 1143252 addons.go:69] Setting metrics-server=true in profile "embed-certs-223260"
	I0603 13:50:04.928906 1143252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223260"
	I0603 13:50:04.928923 1143252 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223260"
	I0603 13:50:04.928935 1143252 addons.go:234] Setting addon metrics-server=true in "embed-certs-223260"
	W0603 13:50:04.928938 1143252 addons.go:243] addon storage-provisioner should already be in state true
	W0603 13:50:04.928945 1143252 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.929307 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929346 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929352 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929372 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929597 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929630 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.944948 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0603 13:50:04.945071 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0603 13:50:04.945489 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.945571 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.946137 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946166 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946299 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946319 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946589 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946650 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946798 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.947022 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0603 13:50:04.947210 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.947250 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.947517 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.948043 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.948069 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.948437 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.949064 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.949107 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.950532 1143252 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223260"
	W0603 13:50:04.950558 1143252 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:04.950589 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.950951 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.951008 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.964051 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37589
	I0603 13:50:04.964078 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0603 13:50:04.964513 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.964562 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.965062 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965088 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965128 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965153 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965473 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965532 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965652 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.965740 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.967606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.967739 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.969783 1143252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:04.971193 1143252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:02.567560 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.567988 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.568020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:50:02.567915 1144471 retry.go:31] will retry after 3.955051362s: waiting for machine to come up
	I0603 13:50:04.972568 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:04.972588 1143252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:04.972606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971275 1143252 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:04.972634 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:04.972658 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971495 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0603 13:50:04.973108 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.973575 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.973599 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.973931 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.974623 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.974658 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.976128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976251 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976535 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976559 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976709 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976724 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976768 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976915 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977099 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977156 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977242 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977305 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.977500 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.990810 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0603 13:50:04.991293 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.991844 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.991875 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.992279 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.992499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.994225 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.994456 1143252 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:04.994476 1143252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:04.994490 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.997771 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998210 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.998239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998418 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.998627 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.998811 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.998941 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:05.119962 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:05.140880 1143252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:05.271863 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:05.275815 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:05.275843 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:05.294572 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:05.346520 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:05.346553 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:05.417100 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:05.417141 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:05.496250 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:06.207746 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207781 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.207849 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207873 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208103 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208152 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208161 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208182 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208157 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208197 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208200 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208216 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208208 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208284 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208572 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208590 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208691 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208703 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208724 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.216764 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.216783 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.217095 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.217111 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374254 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374281 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374603 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374623 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374634 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374638 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.374644 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374901 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374916 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374933 1143252 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223260"
	I0603 13:50:06.374948 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.377491 1143252 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
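
The addon sequence above stages each manifest under /etc/kubernetes/addons over SSH and then applies them with the kubectl binary bundled inside the guest. A rough by-hand equivalent of the apply step, using only paths and the v1.30.1 binary location that appear in the log, would be:

    # illustrative sketch only; every path below is taken from the log lines above
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.1/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml
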
	I0603 13:50:08.083130 1143678 start.go:364] duration metric: took 3m45.627229097s to acquireMachinesLock for "old-k8s-version-151788"
	I0603 13:50:08.083256 1143678 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:08.083266 1143678 fix.go:54] fixHost starting: 
	I0603 13:50:08.083762 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:08.083812 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:08.103187 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0603 13:50:08.103693 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:08.104269 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:50:08.104299 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:08.104746 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:08.105115 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:08.105347 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetState
	I0603 13:50:08.107125 1143678 fix.go:112] recreateIfNeeded on old-k8s-version-151788: state=Stopped err=<nil>
	I0603 13:50:08.107173 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	W0603 13:50:08.107340 1143678 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:08.109207 1143678 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-151788" ...
	I0603 13:50:06.378684 1143252 addons.go:510] duration metric: took 1.4549999s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 13:50:07.145643 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
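
node_ready.go keeps polling the node object until its Ready condition turns True, within the 6m0s budget set when the wait started. Outside the harness, roughly the same check can be done with kubectl wait; the context and node name below are simply the profile name from the log:

    # hand-run equivalent of the readiness poll (sketch, not the harness code)
    kubectl --context embed-certs-223260 wait --for=condition=Ready \
      node/embed-certs-223260 --timeout=6m
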
	I0603 13:50:06.526793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527302 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Found IP for machine: 192.168.39.177
	I0603 13:50:06.527341 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has current primary IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527366 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserving static IP address...
	I0603 13:50:06.527822 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserved static IP address: 192.168.39.177
	I0603 13:50:06.527857 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for SSH to be available...
	I0603 13:50:06.527902 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.527956 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | skip adding static IP to network mk-default-k8s-diff-port-030870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"}
	I0603 13:50:06.527973 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Getting to WaitForSSH function...
	I0603 13:50:06.530287 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.530696 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530802 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH client type: external
	I0603 13:50:06.530827 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa (-rw-------)
	I0603 13:50:06.530849 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:06.530866 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | About to run SSH command:
	I0603 13:50:06.530877 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | exit 0
	I0603 13:50:06.653910 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:06.654259 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetConfigRaw
	I0603 13:50:06.654981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:06.658094 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658561 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.658600 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658921 1143450 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/config.json ...
	I0603 13:50:06.659144 1143450 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:06.659168 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:06.659486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.662534 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.662915 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.662959 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.663059 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.663258 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663476 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663660 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.663866 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.664103 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.664115 1143450 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:06.766054 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:06.766083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766406 1143450 buildroot.go:166] provisioning hostname "default-k8s-diff-port-030870"
	I0603 13:50:06.766440 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.769445 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.769820 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.769871 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.770029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.770244 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770423 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770670 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.770893 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.771057 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.771070 1143450 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-030870 && echo "default-k8s-diff-port-030870" | sudo tee /etc/hostname
	I0603 13:50:06.889997 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-030870
	
	I0603 13:50:06.890029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.893778 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894260 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.894297 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894614 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.894826 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895211 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.895423 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.895608 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.895625 1143450 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-030870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-030870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-030870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:07.007930 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:07.007971 1143450 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:07.008009 1143450 buildroot.go:174] setting up certificates
	I0603 13:50:07.008020 1143450 provision.go:84] configureAuth start
	I0603 13:50:07.008034 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:07.008433 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:07.011208 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011607 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.011640 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011774 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.013986 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014431 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.014462 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014655 1143450 provision.go:143] copyHostCerts
	I0603 13:50:07.014726 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:07.014737 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:07.014787 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:07.014874 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:07.014882 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:07.014902 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:07.014952 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:07.014959 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:07.014974 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:07.015020 1143450 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-030870 san=[127.0.0.1 192.168.39.177 default-k8s-diff-port-030870 localhost minikube]
	I0603 13:50:07.402535 1143450 provision.go:177] copyRemoteCerts
	I0603 13:50:07.402595 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:07.402626 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.405891 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406240 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.406272 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406484 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.406718 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.406943 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.407132 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.489480 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:07.517212 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 13:50:07.543510 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:07.570284 1143450 provision.go:87] duration metric: took 562.244781ms to configureAuth
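
configureAuth regenerates a server certificate whose SANs cover the VM's current address (192.168.39.177 here) and copies it, its key, and the CA into /etc/docker on the guest. If a TLS failure is suspected later, the staged material can be checked by hand from inside the VM, for example:

    # optional sanity check of the certs copied above (not part of the test flow)
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
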
	I0603 13:50:07.570318 1143450 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:07.570537 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:07.570629 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.574171 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574706 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.574739 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574948 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.575262 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575549 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575781 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.575965 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.576217 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.576247 1143450 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:07.839415 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:07.839455 1143450 machine.go:97] duration metric: took 1.180296439s to provisionDockerMachine
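
The %!s(MISSING) in the command above is only a printf re-formatting artifact of the logger; judging by the output echoed back over SSH, the command that actually ran writes the insecure-registry option into a sysconfig drop-in and restarts CRI-O, roughly:

    # reconstructed from the SSH output above; treat as a sketch, not the literal source
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
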
	I0603 13:50:07.839468 1143450 start.go:293] postStartSetup for "default-k8s-diff-port-030870" (driver="kvm2")
	I0603 13:50:07.839482 1143450 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:07.839506 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:07.839843 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:07.839872 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.842547 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.842884 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.842918 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.843234 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.843471 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.843708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.843952 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.927654 1143450 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:07.932965 1143450 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:07.932997 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:07.933082 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:07.933202 1143450 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:07.933343 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:07.945059 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:07.975774 1143450 start.go:296] duration metric: took 136.280559ms for postStartSetup
	I0603 13:50:07.975822 1143450 fix.go:56] duration metric: took 20.481265153s for fixHost
	I0603 13:50:07.975848 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.979035 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979436 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.979486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979737 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.980012 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980228 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980452 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.980691 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.980935 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.980954 1143450 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:08.082946 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422608.057620379
	
	I0603 13:50:08.082978 1143450 fix.go:216] guest clock: 1717422608.057620379
	I0603 13:50:08.082988 1143450 fix.go:229] Guest: 2024-06-03 13:50:08.057620379 +0000 UTC Remote: 2024-06-03 13:50:07.975826846 +0000 UTC m=+237.845886752 (delta=81.793533ms)
	I0603 13:50:08.083018 1143450 fix.go:200] guest clock delta is within tolerance: 81.793533ms
	I0603 13:50:08.083025 1143450 start.go:83] releasing machines lock for "default-k8s-diff-port-030870", held for 20.588515063s
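
The fixHost step reads the guest clock over SSH (evidently date +%s.%N, again mangled by the logger) and compares it with the host-side timestamp; the run proceeds only because the ~82ms delta is within tolerance. To eyeball the same skew by hand (assuming minikube ssh is available on the host):

    # sketch: read both clocks and compare; -p selects the profile named in the log
    date +%s.%N
    minikube -p default-k8s-diff-port-030870 ssh -- date +%s.%N
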
	I0603 13:50:08.083060 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.083369 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:08.086674 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087202 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.087285 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087508 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088324 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088575 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088673 1143450 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:08.088758 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.088823 1143450 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:08.088852 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.092020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092175 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092406 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092485 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092863 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092893 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092916 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.092924 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.093273 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093522 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093541 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093708 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.093710 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.176292 1143450 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:08.204977 1143450 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:08.367121 1143450 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:08.376347 1143450 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:08.376431 1143450 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:08.398639 1143450 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:08.398672 1143450 start.go:494] detecting cgroup driver to use...
	I0603 13:50:08.398750 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:08.422776 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:08.443035 1143450 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:08.443108 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:08.459853 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:08.482009 1143450 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:08.631237 1143450 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:08.806623 1143450 docker.go:233] disabling docker service ...
	I0603 13:50:08.806715 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:08.827122 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:08.842457 1143450 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:08.999795 1143450 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:09.148706 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
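
cri-docker and docker are each stopped, disabled and masked, sockets included, so that socket activation cannot quietly bring a daemon back up to compete with CRI-O for the runtime role. Consolidated, the sequence above amounts to:

    # consolidated form of the unit shutdown shown above (sketch)
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket && sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket && sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"
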
	I0603 13:50:09.167314 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:09.188867 1143450 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:09.188959 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.202239 1143450 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:09.202319 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.216228 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.231140 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.246767 1143450 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:09.260418 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.274349 1143450 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.300588 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
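
The chain of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. The resulting keys are not dumped anywhere in this log, but from the sed expressions they should read approximately as below, which can be confirmed on the guest with a grep:

    # verification sketch; expected values reconstructed from the sed commands above
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (approximately):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
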
	I0603 13:50:09.314659 1143450 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:09.326844 1143450 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:09.326919 1143450 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:09.344375 1143450 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
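
The sysctl probe above fails only because br_netfilter is not loaded yet, which the code treats as non-fatal; it then loads the module and enables IPv4 forwarding. Afterwards the earlier check should resolve instead of erroring:

    # quick follow-up checks (sketch)
    lsmod | grep br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # the /proc entry now exists
    cat /proc/sys/net/ipv4/ip_forward                # expected: 1
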
	I0603 13:50:09.357955 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:09.504105 1143450 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:09.685468 1143450 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:09.685562 1143450 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:09.690863 1143450 start.go:562] Will wait 60s for crictl version
	I0603 13:50:09.690943 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:50:09.696532 1143450 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:09.742785 1143450 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:09.742891 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.782137 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.816251 1143450 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:09.817854 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:09.821049 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821555 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:09.821595 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821855 1143450 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:09.826658 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
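
The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the network gateway 192.168.39.1. A quick check from inside the guest:

    # verification sketch
    getent hosts host.minikube.internal   # expected: 192.168.39.1 host.minikube.internal
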
	I0603 13:50:09.841351 1143450 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:09.841521 1143450 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:09.841586 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:09.883751 1143450 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:09.883825 1143450 ssh_runner.go:195] Run: which lz4
	I0603 13:50:09.888383 1143450 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:50:09.893662 1143450 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:09.893704 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
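The two steps above (a stat probe over SSH, then an scp of the preload tarball) boil down to "copy the file only if it is not already on the VM". A minimal local-filesystem sketch of that pattern in Go; the paths are placeholders and the real transfer in the log happens remotely via scp:

	// preload_copy.go: copy a preload tarball only if the destination is missing.
	// Paths are placeholders; the log's transfer runs over SSH/scp instead.
	package main
	
	import (
		"fmt"
		"io"
		"os"
	)
	
	func copyIfMissing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present, nothing to do
		} else if !os.IsNotExist(err) {
			return err // stat failed for some other reason
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}
	
	func main() {
		// hypothetical paths standing in for the minikube cache and the VM target
		if err := copyIfMissing("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
			fmt.Println("copy failed:", err)
		}
	}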
	I0603 13:50:08.110706 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .Start
	I0603 13:50:08.110954 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring networks are active...
	I0603 13:50:08.111890 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network default is active
	I0603 13:50:08.112291 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network mk-old-k8s-version-151788 is active
	I0603 13:50:08.112708 1143678 main.go:141] libmachine: (old-k8s-version-151788) Getting domain xml...
	I0603 13:50:08.113547 1143678 main.go:141] libmachine: (old-k8s-version-151788) Creating domain...
	I0603 13:50:09.528855 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting to get IP...
	I0603 13:50:09.529978 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.530410 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.530453 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.530382 1144654 retry.go:31] will retry after 208.935457ms: waiting for machine to come up
	I0603 13:50:09.741245 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.741816 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.741864 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.741769 1144654 retry.go:31] will retry after 376.532154ms: waiting for machine to come up
	I0603 13:50:10.120533 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.121261 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.121337 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.121239 1144654 retry.go:31] will retry after 339.126643ms: waiting for machine to come up
	I0603 13:50:10.461708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.462488 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.462514 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.462425 1144654 retry.go:31] will retry after 490.057426ms: waiting for machine to come up
	I0603 13:50:10.954107 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.954887 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.954921 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.954840 1144654 retry.go:31] will retry after 711.209001ms: waiting for machine to come up
	I0603 13:50:11.667459 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:11.668198 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:11.668231 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:11.668135 1144654 retry.go:31] will retry after 928.879285ms: waiting for machine to come up
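The repeated "will retry after ..." lines above are a polling loop with growing, jittered delays while libvirt hands the domain a DHCP lease. A small sketch of that retry pattern, assuming a hypothetical lookupIP helper in place of the real libvirt query:

	// retry_ip.go: poll a condition with jittered, growing delays, mirroring
	// the "will retry after ..." lines in the log above.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// lookupIP is a stand-in for asking libvirt for the domain's DHCP lease.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.150", nil // placeholder address
	}
	
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			if ip, err := lookupIP(attempt); err == nil {
				return ip, nil
			}
			// grow the base delay each attempt and add some jitter
			delay := time.Duration(200+attempt*150)*time.Millisecond +
				time.Duration(rand.Intn(250))*time.Millisecond
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
		}
		return "", errors.New("timed out waiting for IP")
	}
	
	func main() {
		ip, err := waitForIP(30 * time.Second)
		fmt.Println(ip, err)
	}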
	I0603 13:50:09.645006 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:10.146403 1143252 node_ready.go:49] node "embed-certs-223260" has status "Ready":"True"
	I0603 13:50:10.146438 1143252 node_ready.go:38] duration metric: took 5.005510729s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:10.146453 1143252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:10.154249 1143252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164361 1143252 pod_ready.go:92] pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:10.164401 1143252 pod_ready.go:81] duration metric: took 10.115855ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164419 1143252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675214 1143252 pod_ready.go:92] pod "etcd-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:11.675243 1143252 pod_ready.go:81] duration metric: took 1.510815036s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675254 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
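The pod_ready waits above poll each system-critical pod until its Ready condition reports True. A sketch of one such check with client-go; the kubeconfig path is a placeholder, the namespace and pod name are taken from the log, and the real code wraps this in richer timeout handling:

	// podready.go: poll a pod until its Ready condition is True, similar in
	// spirit to the pod_ready waits in the log (kubeconfig path is a placeholder).
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-223260", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod")
	}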
	I0603 13:50:11.522734 1143450 crio.go:462] duration metric: took 1.634406537s to copy over tarball
	I0603 13:50:11.522837 1143450 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:13.983446 1143450 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460564522s)
	I0603 13:50:13.983484 1143450 crio.go:469] duration metric: took 2.460706596s to extract the tarball
	I0603 13:50:13.983503 1143450 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:50:14.029942 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:14.083084 1143450 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:50:14.083113 1143450 cache_images.go:84] Images are preloaded, skipping loading
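The crictl checks before and after the extraction list the node's images and decide whether anything still needs to be loaded. A rough sketch that shells out to the same command and looks for an expected tag; this does only a substring check on the raw output, whereas the real code parses the JSON:

	// images_check.go: ask crictl for the image list and look for an expected
	// tag. Sketch only: a substring match on the raw output, not JSON parsing.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("crictl images: %v: %s", err, out)
		}
		return strings.Contains(string(out), tag), nil
	}
	
	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.1")
		fmt.Println(ok, err)
	}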
	I0603 13:50:14.083122 1143450 kubeadm.go:928] updating node { 192.168.39.177 8444 v1.30.1 crio true true} ...
	I0603 13:50:14.083247 1143450 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-030870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:14.083319 1143450 ssh_runner.go:195] Run: crio config
	I0603 13:50:14.142320 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:14.142344 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:14.142354 1143450 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:14.142379 1143450 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-030870 NodeName:default-k8s-diff-port-030870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:50:14.142517 1143450 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-030870"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:14.142577 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:50:14.153585 1143450 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:14.153687 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:14.164499 1143450 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0603 13:50:14.186564 1143450 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:14.205489 1143450 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0603 13:50:14.227005 1143450 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:14.231782 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
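The /etc/hosts edits above are idempotent: drop any existing line for the host name, append the desired "IP<TAB>name" entry, and copy the result back into place. A local sketch of the same filter-and-append step in Go; the log does this remotely through bash and sudo cp, and the file path here is illustrative:

	// hosts_update.go: remove any existing entry for a host name and append a
	// fresh "IP<TAB>name" line, mirroring the bash one-liner in the log.
	// Writes the result next to the input so the sketch is safe to run.
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		// "hosts.txt" stands in for a local copy of /etc/hosts
		if err := upsertHost("hosts.txt", "192.168.39.177", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}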
	I0603 13:50:14.247433 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:14.368336 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:14.391791 1143450 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870 for IP: 192.168.39.177
	I0603 13:50:14.391816 1143450 certs.go:194] generating shared ca certs ...
	I0603 13:50:14.391840 1143450 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:14.392015 1143450 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:14.392075 1143450 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:14.392090 1143450 certs.go:256] generating profile certs ...
	I0603 13:50:14.392282 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/client.key
	I0603 13:50:14.392373 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key.7a30187e
	I0603 13:50:14.392428 1143450 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key
	I0603 13:50:14.392545 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:14.392602 1143450 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:14.392616 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:14.392650 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:14.392687 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:14.392722 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:14.392780 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:14.393706 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:14.424354 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:14.476267 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:14.514457 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:14.548166 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 13:50:14.584479 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:14.626894 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:14.663103 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:50:14.696750 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:14.725770 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:14.755779 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:14.786060 1143450 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:14.805976 1143450 ssh_runner.go:195] Run: openssl version
	I0603 13:50:14.812737 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:14.824707 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831139 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831255 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.838855 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:14.850974 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:14.865613 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871431 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871518 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.878919 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:14.891371 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:14.903721 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909069 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909180 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.915904 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:50:14.928622 1143450 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:14.934466 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:14.941321 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:14.947960 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:14.955629 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:14.962761 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:14.970396 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
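Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same question answered in Go with crypto/x509; the certificate path is a placeholder:

	// checkend.go: report whether a PEM certificate expires within 24 hours,
	// the check `openssl x509 -checkend 86400` performs.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}
	
	func main() {
		soon, err := expiresWithin("apiserver.crt", 24*time.Hour) // placeholder path
		fmt.Println(soon, err)
	}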
	I0603 13:50:14.977381 1143450 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:14.977543 1143450 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:14.977599 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.042628 1143450 cri.go:89] found id: ""
	I0603 13:50:15.042733 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:15.055439 1143450 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:15.055469 1143450 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:15.055476 1143450 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:15.055535 1143450 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:15.067250 1143450 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:15.068159 1143450 kubeconfig.go:125] found "default-k8s-diff-port-030870" server: "https://192.168.39.177:8444"
	I0603 13:50:15.070060 1143450 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:15.082723 1143450 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.177
	I0603 13:50:15.082788 1143450 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:15.082809 1143450 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:15.082972 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.124369 1143450 cri.go:89] found id: ""
	I0603 13:50:15.124509 1143450 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:15.144064 1143450 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:15.156148 1143450 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:15.156174 1143450 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:15.156240 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 13:50:15.166927 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:15.167006 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:12.598536 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:12.598972 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:12.599008 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:12.598948 1144654 retry.go:31] will retry after 882.970422ms: waiting for machine to come up
	I0603 13:50:13.483171 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:13.483723 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:13.483758 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:13.483640 1144654 retry.go:31] will retry after 1.215665556s: waiting for machine to come up
	I0603 13:50:14.701392 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:14.701960 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:14.701991 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:14.701899 1144654 retry.go:31] will retry after 1.614371992s: waiting for machine to come up
	I0603 13:50:16.318708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:16.319127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:16.319148 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:16.319103 1144654 retry.go:31] will retry after 2.146267337s: waiting for machine to come up
	I0603 13:50:13.683419 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:15.684744 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:16.792510 1143252 pod_ready.go:92] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.792538 1143252 pod_ready.go:81] duration metric: took 5.117277447s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.792549 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798083 1143252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.798112 1143252 pod_ready.go:81] duration metric: took 5.554915ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798126 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804217 1143252 pod_ready.go:92] pod "kube-proxy-s5vdl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.804247 1143252 pod_ready.go:81] duration metric: took 6.113411ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804262 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810317 1143252 pod_ready.go:92] pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.810343 1143252 pod_ready.go:81] duration metric: took 6.073098ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810357 1143252 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:15.178645 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 13:50:15.486524 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:15.486608 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:15.497694 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.509586 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:15.509665 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.521976 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 13:50:15.533446 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:15.533535 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
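Each of the config checks above greps a kubeconfig for the expected control-plane endpoint and removes the file when the endpoint is absent (here the files simply do not exist yet, so the greps fail and the rm calls are no-ops). A sketch of that keep-or-remove decision; paths and endpoint are taken from the log:

	// stale_config.go: keep a kubeconfig only if it points at the expected
	// control-plane endpoint; otherwise remove it so kubeadm regenerates it.
	// Mirrors the grep / rm -f sequence in the log.
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func removeIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			if os.IsNotExist(err) {
				return nil // nothing to clean up
			}
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // config already targets the right endpoint
		}
		return os.Remove(path)
	}
	
	func main() {
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			if err := removeIfStale("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8444"); err != nil {
				fmt.Println(f, err)
			}
		}
	}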
	I0603 13:50:15.545525 1143450 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:15.557558 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:15.710109 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.725380 1143450 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015227554s)
	I0603 13:50:16.725452 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.964275 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.061586 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.183665 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:17.183764 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:17.684365 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.184269 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.254733 1143450 api_server.go:72] duration metric: took 1.07106398s to wait for apiserver process to appear ...
	I0603 13:50:18.254769 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:50:18.254797 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
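The healthz wait that starts here polls https://192.168.39.177:8444/healthz until it returns 200, treating 403 and 500 bodies like the ones below as "not ready yet". A minimal polling sketch; it skips TLS verification for brevity, where a real client would load the cluster CA instead:

	// healthz_wait.go: poll an apiserver /healthz endpoint until it returns 200
	// or a deadline passes. TLS verification is skipped for brevity.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}
	
	func main() {
		fmt.Println(waitHealthy("https://192.168.39.177:8444/healthz", 2*time.Minute))
	}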
	I0603 13:50:18.466825 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:18.467260 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:18.467292 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:18.467187 1144654 retry.go:31] will retry after 2.752334209s: waiting for machine to come up
	I0603 13:50:21.220813 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:21.221235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:21.221267 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:21.221182 1144654 retry.go:31] will retry after 3.082080728s: waiting for machine to come up
	I0603 13:50:18.819188 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.323790 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.193140 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.193177 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.193193 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.265534 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.265580 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.265602 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.277669 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.277703 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.754973 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.761802 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:21.761841 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.255071 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.262166 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.262227 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.755128 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.759896 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.759936 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.255520 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.262093 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.262128 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.755784 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.760053 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.760079 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.255534 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.259793 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:24.259820 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.755365 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.759964 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:50:24.768830 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:24.768862 1143450 api_server.go:131] duration metric: took 6.51408552s to wait for apiserver health ...
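The block above is minikube's api_server.go polling the apiserver's /healthz endpoint roughly every half second until it stops answering 500 and returns 200 (about 6.5s in this run). Below is a minimal sketch of that kind of readiness loop, assuming a plain net/http client; the function name waitForHealthz, the 500ms interval, and the InsecureSkipVerify shortcut are illustrative, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
	// printing the [+]/[-] hook list on non-200 responses, much like the log above.
	// Hypothetical helper, not minikube's api_server.go.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed cert in this setup; a real client
			// would load the cluster CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "healthz returned 200: ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("%s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.177:8444/healthz", 4*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}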
	I0603 13:50:24.768872 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:24.768879 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:24.771099 1143450 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:24.772806 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:24.784204 1143450 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
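The two lines above create /etc/cni/net.d and push a 496-byte bridge CNI config into it. The exact contents of 1-k8s.conflist are not shown in the log, so the following is only an illustrative bridge/portmap conflist of the general shape such a file takes, written out by a small Go program to mirror the scp step; the CIDR and plugin options are placeholders.

	package main

	import (
		"fmt"
		"os"
	)

	// bridgeConflist is an illustrative bridge CNI config of the kind written to
	// /etc/cni/net.d/1-k8s.conflist above. The subnet and flags are placeholders;
	// the real file generated by minikube may differ.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}
	`

	func main() {
		// Writing the file locally mirrors the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}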
	I0603 13:50:24.805572 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:24.816944 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:24.816988 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:24.816997 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:24.817008 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:24.817021 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:24.817028 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:50:24.817037 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:24.817044 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:24.817050 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:50:24.817060 1143450 system_pods.go:74] duration metric: took 11.461696ms to wait for pod list to return data ...
	I0603 13:50:24.817069 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:24.820804 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:24.820834 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:24.820846 1143450 node_conditions.go:105] duration metric: took 3.771492ms to run NodePressure ...
	I0603 13:50:24.820865 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:25.098472 1143450 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103237 1143450 kubeadm.go:733] kubelet initialised
	I0603 13:50:25.103263 1143450 kubeadm.go:734] duration metric: took 4.763539ms waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103274 1143450 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:25.109364 1143450 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.114629 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114662 1143450 pod_ready.go:81] duration metric: took 5.268473ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.114676 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114687 1143450 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.118734 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118777 1143450 pod_ready.go:81] duration metric: took 4.079659ms for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.118790 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118810 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.123298 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123334 1143450 pod_ready.go:81] duration metric: took 4.509948ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.123351 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123361 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.210283 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210316 1143450 pod_ready.go:81] duration metric: took 86.945898ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.210329 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210338 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.609043 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609074 1143450 pod_ready.go:81] duration metric: took 398.728553ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.609084 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609091 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.009831 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009866 1143450 pod_ready.go:81] duration metric: took 400.766037ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.009880 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009888 1143450 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.410271 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410301 1143450 pod_ready.go:81] duration metric: took 400.402293ms for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.410315 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410326 1143450 pod_ready.go:38] duration metric: took 1.307039933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
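The pod_ready.go lines above skip every system-critical pod because the node hosting them does not yet report Ready. A rough client-go sketch of the two checks involved, assuming an already-constructed *kubernetes.Clientset; the helper names isPodReady and isNodeReady are illustrative, not minikube's pod_ready.go.

	package readiness

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the named pod has condition Ready=True, the same
	// condition the wait loop above is driven by. Illustrative helper.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	// isNodeReady mirrors the `node ... has status "Ready":"False"` check that
	// causes each pod above to be skipped.
	func isNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}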
	I0603 13:50:26.410347 1143450 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:26.422726 1143450 ops.go:34] apiserver oom_adj: -16
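The oom_adj probe above amounts to reading /proc/<pid>/oom_adj for the kube-apiserver process; -16 tells the kernel to strongly avoid OOM-killing it. A small sketch of the same check; it uses pgrep -o to pick a single PID, a minor deviation from the logged one-liner.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Same idea as: cat /proc/$(pgrep kube-apiserver)/oom_adj
		out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			os.Exit(1)
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the run above
	}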
	I0603 13:50:26.422753 1143450 kubeadm.go:591] duration metric: took 11.367271168s to restartPrimaryControlPlane
	I0603 13:50:26.422763 1143450 kubeadm.go:393] duration metric: took 11.445396197s to StartCluster
	I0603 13:50:26.422784 1143450 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.422866 1143450 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:26.424423 1143450 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.424744 1143450 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:26.426628 1143450 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:26.424855 1143450 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:26.424985 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:26.428227 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:26.428239 1143450 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428241 1143450 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428275 1143450 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-030870"
	I0603 13:50:26.428285 1143450 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428297 1143450 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:50:26.428243 1143450 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428338 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428404 1143450 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428428 1143450 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:26.428501 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428650 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428676 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428724 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428751 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428948 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.429001 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.445709 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0603 13:50:26.446187 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.446719 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.446743 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.447152 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.447817 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.447852 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.449660 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0603 13:50:26.449721 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0603 13:50:26.450120 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450161 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450735 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450755 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.450906 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450930 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.451177 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451333 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451421 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.451909 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.451951 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.455458 1143450 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.455484 1143450 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:26.455523 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.455776 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.455825 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.470807 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0603 13:50:26.471179 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0603 13:50:26.471763 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.471921 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472042 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0603 13:50:26.472471 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472501 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472575 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472750 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472760 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472966 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473095 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.473118 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.473132 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473134 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473357 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473486 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.474129 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.474183 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.475437 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.475594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.477911 1143450 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:26.479474 1143450 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:24.304462 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:24.305104 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:24.305175 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:24.305099 1144654 retry.go:31] will retry after 4.178596743s: waiting for machine to come up
	I0603 13:50:26.480998 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:26.481021 1143450 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:26.481047 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.479556 1143450 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.481095 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:26.481116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.484634 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.484694 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485147 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485160 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485538 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485628 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485729 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485829 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485856 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.485993 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.486040 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.486158 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.496035 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0603 13:50:26.496671 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.497270 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.497290 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.497719 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.497989 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.500018 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.500280 1143450 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.500298 1143450 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:26.500318 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.503226 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503732 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.503768 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503967 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.504212 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.504399 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.504556 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.608774 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:26.629145 1143450 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:26.692164 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.784756 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.788686 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:26.788711 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:26.841094 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:26.841129 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:26.907657 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:26.907688 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:26.963244 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963280 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963618 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963641 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963649 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963653 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.963657 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963962 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963980 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963982 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.971726 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.971748 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.972101 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.972125 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.975238 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
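Each addon enable boils down to scp'ing its manifests into /etc/kubernetes/addons and applying them with the kubectl bundled in the VM, as the ssh_runner lines show. A local-exec sketch of that apply step, assuming kubectl is on PATH; the wrapper name applyManifests is illustrative, and the manifest paths are the ones from the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifests runs `kubectl apply -f <file> -f <file> ...`, mirroring the
	// addon-enable command above (which additionally points KUBECONFIG at
	// /var/lib/minikube/kubeconfig and uses the kubectl under
	// /var/lib/minikube/binaries). Illustrative wrapper, not minikube code.
	func applyManifests(kubeconfig string, files ...string) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		err := applyManifests("/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}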
	I0603 13:50:27.653643 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.653689 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654037 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654061 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.654078 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.654087 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654429 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.654484 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654507 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847367 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847397 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.847745 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.847770 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847779 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847785 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.847793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.848112 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.848130 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.848144 1143450 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-030870"
	I0603 13:50:27.851386 1143450 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0603 13:50:23.817272 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:25.818013 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:27.818160 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:29.798777 1142862 start.go:364] duration metric: took 56.694826675s to acquireMachinesLock for "no-preload-817450"
	I0603 13:50:29.798855 1142862 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:29.798866 1142862 fix.go:54] fixHost starting: 
	I0603 13:50:29.799329 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:29.799369 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:29.817787 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0603 13:50:29.818396 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:29.819003 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:50:29.819025 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:29.819450 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:29.819617 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:29.819782 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:50:29.821742 1142862 fix.go:112] recreateIfNeeded on no-preload-817450: state=Stopped err=<nil>
	I0603 13:50:29.821777 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	W0603 13:50:29.821973 1142862 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:29.823915 1142862 out.go:177] * Restarting existing kvm2 VM for "no-preload-817450" ...
	I0603 13:50:27.852929 1143450 addons.go:510] duration metric: took 1.428071927s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0603 13:50:28.633355 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:29.825584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Start
	I0603 13:50:29.825783 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring networks are active...
	I0603 13:50:29.826746 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network default is active
	I0603 13:50:29.827116 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network mk-no-preload-817450 is active
	I0603 13:50:29.827617 1142862 main.go:141] libmachine: (no-preload-817450) Getting domain xml...
	I0603 13:50:29.828419 1142862 main.go:141] libmachine: (no-preload-817450) Creating domain...
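Restarting the stopped no-preload profile is a libvirt operation: make sure the default and mk-no-preload-817450 networks are active, then bring the domain back up. The kvm2 driver does this through the libvirt API; the sketch below is only a rough CLI equivalent using virsh via os/exec, with the domain and network names taken from the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes a virsh subcommand and streams its output.
	func run(args ...string) error {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Ensure the networks the log mentions are active; errors are ignored
		// because they may already be running.
		_ = run("net-start", "default")
		_ = run("net-start", "mk-no-preload-817450")

		// Start the stopped domain, a rough stand-in for (no-preload-817450) Calling .Start.
		if err := run("start", "no-preload-817450"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}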
	I0603 13:50:28.485041 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.485598 1143678 main.go:141] libmachine: (old-k8s-version-151788) Found IP for machine: 192.168.50.65
	I0603 13:50:28.485624 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserving static IP address...
	I0603 13:50:28.485639 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has current primary IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.486053 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserved static IP address: 192.168.50.65
	I0603 13:50:28.486109 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.486123 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting for SSH to be available...
	I0603 13:50:28.486144 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | skip adding static IP to network mk-old-k8s-version-151788 - found existing host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"}
	I0603 13:50:28.486156 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Getting to WaitForSSH function...
	I0603 13:50:28.488305 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.488754 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.488788 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.489025 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH client type: external
	I0603 13:50:28.489048 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa (-rw-------)
	I0603 13:50:28.489114 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:28.489147 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | About to run SSH command:
	I0603 13:50:28.489167 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | exit 0
	I0603 13:50:28.613732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:28.614183 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:50:28.614879 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.617742 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.618270 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618481 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:50:28.618699 1143678 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:28.618719 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:28.618967 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.621356 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621655 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.621685 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.622117 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622321 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622511 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.622750 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.622946 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.622958 1143678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:28.726383 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:28.726419 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.726740 1143678 buildroot.go:166] provisioning hostname "old-k8s-version-151788"
	I0603 13:50:28.726777 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.727042 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.729901 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730372 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.730402 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730599 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.730824 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731031 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731205 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.731403 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.731585 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.731599 1143678 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-151788 && echo "old-k8s-version-151788" | sudo tee /etc/hostname
	I0603 13:50:28.848834 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-151788
	
	I0603 13:50:28.848867 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.852250 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852698 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.852721 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852980 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.853239 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853536 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853819 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.854093 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.854338 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.854367 1143678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-151788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-151788/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-151788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:28.967427 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:28.967461 1143678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:28.967520 1143678 buildroot.go:174] setting up certificates
	I0603 13:50:28.967538 1143678 provision.go:84] configureAuth start
	I0603 13:50:28.967550 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.967946 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.970841 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971226 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.971256 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971449 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.974316 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974702 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.974732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974911 1143678 provision.go:143] copyHostCerts
	I0603 13:50:28.974994 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:28.975010 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:28.975068 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:28.975247 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:28.975260 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:28.975283 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:28.975354 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:28.975362 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:28.975385 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:28.975463 1143678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-151788 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-151788]
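configureAuth issues a per-machine server certificate signed by the minikube CA, with the SANs listed above (127.0.0.1, the VM IP, localhost, minikube, and the profile name). A condensed crypto/x509 sketch of producing such a SAN-bearing server cert; loading the CA pair from ca.pem/ca-key.pem and writing server.pem/server-key.pem are left out, and nothing here is minikube's actual provision code.

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server certificate for the given IP and DNS SANs,
	// signed by the supplied CA. Condensed sketch; validity, key size, and the
	// Organization value are placeholders.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-151788"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.50.65
			DNSNames:     dnsNames, // e.g. localhost, minikube, old-k8s-version-151788
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}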
	I0603 13:50:29.096777 1143678 provision.go:177] copyRemoteCerts
	I0603 13:50:29.096835 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:29.096865 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.099989 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100408 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.100434 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100644 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.100831 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.100975 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.101144 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.184886 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:29.211432 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 13:50:29.238552 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:50:29.266803 1143678 provision.go:87] duration metric: took 299.247567ms to configureAuth
	I0603 13:50:29.266844 1143678 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:29.267107 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:50:29.267220 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.270966 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271417 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.271472 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271688 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.271893 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272121 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272327 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.272544 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.272787 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.272811 1143678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:29.548407 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:29.548437 1143678 machine.go:97] duration metric: took 929.724002ms to provisionDockerMachine
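The SSH command a few lines above finishes provisioning the container runtime by dropping a one-line sysconfig file for CRI-O and restarting the service. Reproduced as a standalone sketch (all values copied from the log output):

	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio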
	I0603 13:50:29.548449 1143678 start.go:293] postStartSetup for "old-k8s-version-151788" (driver="kvm2")
	I0603 13:50:29.548461 1143678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:29.548486 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.548924 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:29.548992 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.552127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552531 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.552571 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552756 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.552974 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.553166 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.553364 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.637026 1143678 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:29.641264 1143678 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:29.641293 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:29.641376 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:29.641509 1143678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:29.641600 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:29.657273 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:29.688757 1143678 start.go:296] duration metric: took 140.291954ms for postStartSetup
	I0603 13:50:29.688806 1143678 fix.go:56] duration metric: took 21.605539652s for fixHost
	I0603 13:50:29.688843 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.691764 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692170 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.692216 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692356 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.692623 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692814 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692996 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.693180 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.693372 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.693384 1143678 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:50:29.798629 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422629.770375968
	
	I0603 13:50:29.798655 1143678 fix.go:216] guest clock: 1717422629.770375968
	I0603 13:50:29.798662 1143678 fix.go:229] Guest: 2024-06-03 13:50:29.770375968 +0000 UTC Remote: 2024-06-03 13:50:29.688811675 +0000 UTC m=+247.377673500 (delta=81.564293ms)
	I0603 13:50:29.798683 1143678 fix.go:200] guest clock delta is within tolerance: 81.564293ms
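The clock check above reads a high-resolution timestamp inside the guest over SSH and diffs it against the host clock; the ~81ms delta is within tolerance, so no time resync is needed. A bash sketch of the same comparison (the tolerance threshold is not shown in this log; the IP, key path and "docker" user come from the sshutil lines above):

	guest=$(ssh -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa docker@192.168.50.65 date +%s.%N)
	host=$(date +%s.%N)
	echo "guest clock delta: $(echo "$host - $guest" | bc)s"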
	I0603 13:50:29.798688 1143678 start.go:83] releasing machines lock for "old-k8s-version-151788", held for 21.715483341s
	I0603 13:50:29.798712 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.799019 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:29.802078 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802479 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.802522 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802674 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803271 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803496 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803584 1143678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:29.803646 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.803961 1143678 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:29.803988 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.806505 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806863 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806926 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.806961 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807093 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807299 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807345 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.807386 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807476 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.807670 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807669 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.807841 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807947 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.808183 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.890622 1143678 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:29.918437 1143678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:30.064471 1143678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:30.073881 1143678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:30.073969 1143678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:30.097037 1143678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:30.097070 1143678 start.go:494] detecting cgroup driver to use...
	I0603 13:50:30.097147 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:30.114374 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:30.132000 1143678 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:30.132075 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:30.148156 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:30.164601 1143678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:30.303125 1143678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:30.475478 1143678 docker.go:233] disabling docker service ...
	I0603 13:50:30.475578 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:30.494632 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:30.513383 1143678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:30.691539 1143678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:30.849280 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:30.869107 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:30.893451 1143678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 13:50:30.893528 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.909358 1143678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:30.909465 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.926891 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.941879 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.957985 1143678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:30.971349 1143678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:30.984948 1143678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:30.985023 1143678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:30.999255 1143678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:31.011615 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:31.162848 1143678 ssh_runner.go:195] Run: sudo systemctl restart crio
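The sequence logged above switches the node's runtime setup to CRI-O: crictl is pointed at the CRI-O socket, the pause image and cgroup manager are set in the 02-crio.conf drop-in, the bridge-CNI kernel prerequisites are enabled, and CRI-O is restarted. Condensed into a standalone sketch (same files, keys and values as in the log):

	# point crictl at CRI-O
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image, use the cgroupfs driver, run conmon in the pod cgroup
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# kernel prerequisites for the bridge CNI
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio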
	I0603 13:50:31.352121 1143678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:31.352190 1143678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:31.357946 1143678 start.go:562] Will wait 60s for crictl version
	I0603 13:50:31.358032 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:31.362540 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:31.410642 1143678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:31.410757 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.444750 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.482404 1143678 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 13:50:31.484218 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:31.488049 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488663 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:31.488695 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488985 1143678 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:31.494813 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:31.511436 1143678 kubeadm.go:877] updating cluster {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:31.511597 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:50:31.511659 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:31.571733 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:31.571819 1143678 ssh_runner.go:195] Run: which lz4
	I0603 13:50:31.577765 1143678 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:50:31.583983 1143678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:31.584025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 13:50:30.319230 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:32.824874 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:30.633456 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:32.134192 1143450 node_ready.go:49] node "default-k8s-diff-port-030870" has status "Ready":"True"
	I0603 13:50:32.134227 1143450 node_ready.go:38] duration metric: took 5.505047986s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:32.134241 1143450 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:32.143157 1143450 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150075 1143450 pod_ready.go:92] pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:32.150113 1143450 pod_ready.go:81] duration metric: took 6.922006ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150128 1143450 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:34.157758 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:31.283193 1142862 main.go:141] libmachine: (no-preload-817450) Waiting to get IP...
	I0603 13:50:31.284191 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.284681 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.284757 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.284641 1144889 retry.go:31] will retry after 246.139268ms: waiting for machine to come up
	I0603 13:50:31.532345 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.533024 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.533056 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.532956 1144889 retry.go:31] will retry after 283.586657ms: waiting for machine to come up
	I0603 13:50:31.818610 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.819271 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.819302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.819235 1144889 retry.go:31] will retry after 345.327314ms: waiting for machine to come up
	I0603 13:50:32.165948 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.166532 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.166585 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.166485 1144889 retry.go:31] will retry after 567.370644ms: waiting for machine to come up
	I0603 13:50:32.735409 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.736074 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.736118 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.735978 1144889 retry.go:31] will retry after 523.349811ms: waiting for machine to come up
	I0603 13:50:33.261023 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.261738 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.261769 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.261685 1144889 retry.go:31] will retry after 617.256992ms: waiting for machine to come up
	I0603 13:50:33.880579 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.881159 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.881188 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.881113 1144889 retry.go:31] will retry after 975.807438ms: waiting for machine to come up
	I0603 13:50:34.858935 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:34.859418 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:34.859447 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:34.859365 1144889 retry.go:31] will retry after 1.257722281s: waiting for machine to come up
	I0603 13:50:33.399678 1143678 crio.go:462] duration metric: took 1.821959808s to copy over tarball
	I0603 13:50:33.399768 1143678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:36.631033 1143678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.231219364s)
	I0603 13:50:36.631081 1143678 crio.go:469] duration metric: took 3.231364789s to extract the tarball
	I0603 13:50:36.631092 1143678 ssh_runner.go:146] rm: /preloaded.tar.lz4
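Since crictl reported none of the v1.20.0 images present, the preloaded image tarball (about 473 MB, copied over at 13:50:31) is unpacked straight into /var so that CRI-O's image store is populated without pulling from a registry. The unpack step in isolation (lz4 has to be available on the guest, which it evidently is here since the extraction succeeds):

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4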
	I0603 13:50:36.677954 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:36.718160 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:36.718197 1143678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.718456 1143678 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.718302 1143678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.718343 1143678 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.718858 1143678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.720644 1143678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.720573 1143678 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720576 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.720603 1143678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.720608 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.721118 1143678 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.907182 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.907179 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.910017 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.920969 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.925739 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.935710 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.946767 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 13:50:36.973425 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.050763 1143678 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 13:50:37.050817 1143678 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.050846 1143678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 13:50:37.050876 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.050880 1143678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.050906 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162505 1143678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 13:50:37.162561 1143678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.162608 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162706 1143678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 13:50:37.162727 1143678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.162754 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162858 1143678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 13:50:37.162898 1143678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.162922 1143678 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 13:50:37.162965 1143678 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 13:50:37.163001 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162943 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.164963 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.165019 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.165136 1143678 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 13:50:37.165260 1143678 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.165295 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.188179 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.188292 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 13:50:37.188315 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.188371 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.188561 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.300592 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 13:50:37.300642 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 13:50:35.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.160066 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.334685 1143450 pod_ready.go:92] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.334719 1143450 pod_ready.go:81] duration metric: took 5.184582613s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.334732 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341104 1143450 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.341140 1143450 pod_ready.go:81] duration metric: took 6.399805ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341154 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347174 1143450 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.347208 1143450 pod_ready.go:81] duration metric: took 6.044519ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347220 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356909 1143450 pod_ready.go:92] pod "kube-proxy-thsrx" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.356949 1143450 pod_ready.go:81] duration metric: took 9.72108ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356962 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363891 1143450 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.363915 1143450 pod_ready.go:81] duration metric: took 6.9442ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363927 1143450 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:39.372092 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.118754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:36.119214 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:36.119251 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:36.119148 1144889 retry.go:31] will retry after 1.380813987s: waiting for machine to come up
	I0603 13:50:37.501464 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:37.501889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:37.501937 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:37.501849 1144889 retry.go:31] will retry after 2.144177789s: waiting for machine to come up
	I0603 13:50:39.648238 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:39.648744 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:39.648768 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:39.648693 1144889 retry.go:31] will retry after 1.947487062s: waiting for machine to come up
	I0603 13:50:37.360149 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 13:50:37.360196 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 13:50:37.360346 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 13:50:37.360371 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 13:50:37.360436 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 13:50:37.543409 1143678 cache_images.go:92] duration metric: took 825.189409ms to LoadCachedImages
	W0603 13:50:37.543559 1143678 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
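LoadCachedImages asks the runtime for each required image via podman image inspect, marks the missing ones as "needs transfer", and then tries to load them from the local cache directory; here the cache files are absent, so the warning above is emitted and startup continues without them. The per-image presence check, as a standalone sketch:

	# prints the image ID if CRI-O's store already has it, exits non-zero otherwise
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.20.0 \
	  || echo "not in the runtime, needs transfer"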
	I0603 13:50:37.543581 1143678 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0603 13:50:37.543723 1143678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-151788 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:37.543804 1143678 ssh_runner.go:195] Run: crio config
	I0603 13:50:37.601388 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:50:37.601428 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:37.601445 1143678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:37.601471 1143678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-151788 NodeName:old-k8s-version-151788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 13:50:37.601664 1143678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-151788"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:37.601746 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 13:50:37.613507 1143678 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:37.613588 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:37.623853 1143678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0603 13:50:37.642298 1143678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:37.660863 1143678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0603 13:50:37.679974 1143678 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:37.685376 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
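The /bin/bash one-liner above is minikube's idempotent /etc/hosts update: any existing control-plane.minikube.internal line is filtered out, the fresh IP mapping is appended, and the result is copied back over /etc/hosts. The same pattern, parameterised (the variable names are my own):

	NAME=control-plane.minikube.internal
	IP=192.168.50.65
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts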
	I0603 13:50:37.702732 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:37.859343 1143678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:37.880684 1143678 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788 for IP: 192.168.50.65
	I0603 13:50:37.880714 1143678 certs.go:194] generating shared ca certs ...
	I0603 13:50:37.880737 1143678 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:37.880952 1143678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:37.881012 1143678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:37.881024 1143678 certs.go:256] generating profile certs ...
	I0603 13:50:37.881179 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key
	I0603 13:50:37.881279 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3
	I0603 13:50:37.881334 1143678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key
	I0603 13:50:37.881554 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:37.881602 1143678 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:37.881629 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:37.881667 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:37.881698 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:37.881730 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:37.881805 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:37.882741 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:37.919377 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:37.957218 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:37.987016 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:38.024442 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 13:50:38.051406 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:38.094816 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:38.143689 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:50:38.171488 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:38.197296 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:38.224025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:38.250728 1143678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:38.270485 1143678 ssh_runner.go:195] Run: openssl version
	I0603 13:50:38.276995 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:38.288742 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293880 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293955 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.300456 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:38.312180 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:38.324349 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329812 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329881 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.337560 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:38.350229 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:38.362635 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368842 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368920 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.376029 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
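The block starting at 13:50:38.276 installs the three certificates (10862512.pem, minikubeCA.pem, 1086251.pem) into the system trust store the way OpenSSL expects: each PEM sits under /usr/share/ca-certificates and is symlinked into /etc/ssl/certs under its subject-hash name (3ec20f2e.0, b5213941.0 and 51391683.0 in this log). Recreating such a hash link by hand looks like:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$HASH.0"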
	I0603 13:50:38.387703 1143678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:38.393071 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:38.399760 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:38.406332 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:38.413154 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:38.419162 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:38.425818 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
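Each existing control-plane certificate is then checked with openssl's -checkend flag, which exits non-zero if the certificate will expire within the given number of seconds (86400 = 24 hours); a failing check is presumably what would make minikube regenerate that cert. For a single certificate the check looks like:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "still valid 24h from now"
	else
	  echo "expires within 24h (or already expired)"
	fi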
	I0603 13:50:38.432495 1143678 kubeadm.go:391] StartCluster: {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:38.432659 1143678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:38.432718 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.479889 1143678 cri.go:89] found id: ""
	I0603 13:50:38.479975 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:38.490549 1143678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:38.490574 1143678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:38.490580 1143678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:38.490637 1143678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:38.501024 1143678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:38.503665 1143678 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:38.504563 1143678 kubeconfig.go:62] /home/jenkins/minikube-integration/19011-1078924/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-151788" cluster setting kubeconfig missing "old-k8s-version-151788" context setting]
	I0603 13:50:38.505614 1143678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:38.562691 1143678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:38.573839 1143678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.65
	I0603 13:50:38.573889 1143678 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:38.573905 1143678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:38.573987 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.615876 1143678 cri.go:89] found id: ""
	I0603 13:50:38.615972 1143678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:38.633568 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:38.645197 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:38.645229 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:38.645291 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:50:38.655344 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:38.655423 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:38.665789 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:50:38.674765 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:38.674842 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:38.684268 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.693586 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:38.693650 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.703313 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:50:38.712523 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:38.712597 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
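
The stale-config cleanup above greps each file under /etc/kubernetes for the expected control-plane endpoint and removes the file when the endpoint is absent (here the files simply do not exist yet, so every grep exits with status 2). A rough local sketch of that decision, assuming plain file access instead of the ssh_runner used in the log (endpoint and paths copied from the log):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, c := range confs {
            data, err := os.ReadFile(c)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                // Missing or stale config: remove it so kubeadm regenerates it.
                fmt.Printf("removing %s\n", c)
                _ = os.Remove(c)
            }
        }
    }
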
	I0603 13:50:38.722362 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:38.732190 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:38.875545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.722534 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.970226 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.090817 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.193178 1143678 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:40.193485 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:40.693580 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.193579 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.693608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:42.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
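
After the kubeadm init phases, the apiserver wait simply re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` every 500ms until a matching process appears. A hedged sketch of that polling loop (pattern and interval come from the log; the timeout value and function name are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process matching
    // the pattern shows up, or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // -x: exact match, -n: newest, -f: match against the full command line.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil // pgrep exits 0 once a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(4 * time.Minute))
    }
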
	I0603 13:50:39.318177 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.818337 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.373738 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:43.870381 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.597745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:41.598343 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:41.598372 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:41.598280 1144889 retry.go:31] will retry after 2.47307834s: waiting for machine to come up
	I0603 13:50:44.074548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:44.075009 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:44.075037 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:44.074970 1144889 retry.go:31] will retry after 3.055733752s: waiting for machine to come up
	I0603 13:50:42.693593 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.194448 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.693645 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.694583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.194065 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.694138 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.194173 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.694344 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:47.194063 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.316348 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:46.317245 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:47.133727 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134266 1142862 main.go:141] libmachine: (no-preload-817450) Found IP for machine: 192.168.72.125
	I0603 13:50:47.134301 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has current primary IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134308 1142862 main.go:141] libmachine: (no-preload-817450) Reserving static IP address...
	I0603 13:50:47.134745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.134777 1142862 main.go:141] libmachine: (no-preload-817450) Reserved static IP address: 192.168.72.125
	I0603 13:50:47.134797 1142862 main.go:141] libmachine: (no-preload-817450) DBG | skip adding static IP to network mk-no-preload-817450 - found existing host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"}
	I0603 13:50:47.134816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Getting to WaitForSSH function...
	I0603 13:50:47.134858 1142862 main.go:141] libmachine: (no-preload-817450) Waiting for SSH to be available...
	I0603 13:50:47.137239 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137669 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.137705 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137810 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH client type: external
	I0603 13:50:47.137835 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa (-rw-------)
	I0603 13:50:47.137870 1142862 main.go:141] libmachine: (no-preload-817450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:47.137879 1142862 main.go:141] libmachine: (no-preload-817450) DBG | About to run SSH command:
	I0603 13:50:47.137889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | exit 0
	I0603 13:50:47.265932 1142862 main.go:141] libmachine: (no-preload-817450) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:47.266268 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetConfigRaw
	I0603 13:50:47.267007 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.269463 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.269849 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.269885 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.270135 1142862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/config.json ...
	I0603 13:50:47.270355 1142862 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:47.270375 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:47.270589 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.272915 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273307 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.273341 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273543 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.273737 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.273905 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.274061 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.274242 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.274417 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.274429 1142862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:47.380760 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:47.380789 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381068 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:50:47.381095 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381314 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.384093 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384460 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.384482 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.384798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.384938 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.385099 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.385276 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.385533 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.385562 1142862 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-817450 && echo "no-preload-817450" | sudo tee /etc/hostname
	I0603 13:50:47.505203 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-817450
	
	I0603 13:50:47.505231 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.508267 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508696 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.508721 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508877 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.509066 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509281 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509437 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.509606 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.509780 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.509795 1142862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-817450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-817450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-817450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:47.618705 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
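
Provisioning commands like the hostname and /etc/hosts edits above are executed over SSH against the guest with its generated private key. This is not the libmachine code path itself, just an illustration of issuing one such command with golang.org/x/crypto/ssh; the key path, user, address, and command are copied from the log, and host-key checking is skipped purely for brevity:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.72.125:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput(`sudo hostname no-preload-817450 && echo "no-preload-817450" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }
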
	I0603 13:50:47.618757 1142862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:47.618822 1142862 buildroot.go:174] setting up certificates
	I0603 13:50:47.618835 1142862 provision.go:84] configureAuth start
	I0603 13:50:47.618854 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.619166 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.621974 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622512 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.622548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622652 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.624950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625275 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.625302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625419 1142862 provision.go:143] copyHostCerts
	I0603 13:50:47.625504 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:47.625520 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:47.625591 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:47.625697 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:47.625706 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:47.625725 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:47.625790 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:47.625800 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:47.625826 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:47.625891 1142862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.no-preload-817450 san=[127.0.0.1 192.168.72.125 localhost minikube no-preload-817450]
	I0603 13:50:47.733710 1142862 provision.go:177] copyRemoteCerts
	I0603 13:50:47.733769 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:47.733801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.736326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736657 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.736686 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.737036 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.737222 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.737341 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:47.821893 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:47.848085 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 13:50:47.875891 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:47.900761 1142862 provision.go:87] duration metric: took 281.906702ms to configureAuth
	I0603 13:50:47.900795 1142862 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:47.900986 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:47.901072 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.904128 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904551 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.904581 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904802 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.905018 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905203 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905413 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.905609 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.905816 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.905839 1142862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:48.176290 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:48.176321 1142862 machine.go:97] duration metric: took 905.950732ms to provisionDockerMachine
	I0603 13:50:48.176333 1142862 start.go:293] postStartSetup for "no-preload-817450" (driver="kvm2")
	I0603 13:50:48.176344 1142862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:48.176361 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.176689 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:48.176712 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.179595 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.179994 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.180020 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.180186 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.180398 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.180561 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.180704 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.267996 1142862 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:48.272936 1142862 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:48.272970 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:48.273044 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:48.273141 1142862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:48.273285 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:48.283984 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:48.310846 1142862 start.go:296] duration metric: took 134.495139ms for postStartSetup
	I0603 13:50:48.310899 1142862 fix.go:56] duration metric: took 18.512032449s for fixHost
	I0603 13:50:48.310928 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.313969 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314331 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.314358 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.314896 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315258 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.315442 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:48.315681 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:48.315698 1142862 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:48.422576 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422648.390814282
	
	I0603 13:50:48.422599 1142862 fix.go:216] guest clock: 1717422648.390814282
	I0603 13:50:48.422606 1142862 fix.go:229] Guest: 2024-06-03 13:50:48.390814282 +0000 UTC Remote: 2024-06-03 13:50:48.310904217 +0000 UTC m=+357.796105522 (delta=79.910065ms)
	I0603 13:50:48.422636 1142862 fix.go:200] guest clock delta is within tolerance: 79.910065ms
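
The fix step reads the guest clock over SSH (`date +%s.%N`), compares it with the host clock, and only intervenes when the delta leaves a tolerance window; here the 79.9ms delta is within tolerance. A small sketch of that comparison, assuming a one-second tolerance (the actual threshold is not shown in the log):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK returns the absolute guest/host clock delta and whether it is within tol.
    func clockDeltaOK(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        guest := time.Unix(1717422648, 390814282) // parsed from the `date +%s.%N` output in the log
        host := time.Now()
        delta, ok := clockDeltaOK(guest, host, time.Second)
        fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
    }
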
	I0603 13:50:48.422642 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 18.623816039s
	I0603 13:50:48.422659 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.422954 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:48.426261 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426671 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.426701 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426864 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427460 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427661 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427762 1142862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:48.427827 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.427878 1142862 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:48.427914 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.430586 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430830 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430965 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.430993 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431177 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.431355 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431387 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431516 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431676 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431751 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.431798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431936 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.506899 1142862 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:48.545903 1142862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:48.700235 1142862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:48.706614 1142862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:48.706704 1142862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:48.724565 1142862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:48.724592 1142862 start.go:494] detecting cgroup driver to use...
	I0603 13:50:48.724664 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:48.741006 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:48.758824 1142862 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:48.758899 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:48.773280 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:48.791049 1142862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:48.917847 1142862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:49.081837 1142862 docker.go:233] disabling docker service ...
	I0603 13:50:49.081927 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:49.097577 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:49.112592 1142862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:49.228447 1142862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:49.350782 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:49.366017 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:49.385685 1142862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:49.385765 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.396361 1142862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:49.396432 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.408606 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.419642 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.430431 1142862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:49.441378 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.451810 1142862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.469080 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
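
The cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf is edited in place with sed to pin the pause image and switch the cgroup manager to cgroupfs. A rough Go equivalent of those two substitutions, assuming the file is readable locally (in the log they run through ssh_runner on the guest):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }
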
	I0603 13:50:49.480054 1142862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:49.489742 1142862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:49.489814 1142862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:49.502889 1142862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:49.512414 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:49.639903 1142862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:49.786388 1142862 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:49.786486 1142862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:49.791642 1142862 start.go:562] Will wait 60s for crictl version
	I0603 13:50:49.791711 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:49.796156 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:49.841667 1142862 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:49.841765 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.872213 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.910979 1142862 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:46.370749 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:48.870860 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:49.912417 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:49.915368 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915731 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:49.915759 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915913 1142862 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:49.920247 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:49.933231 1142862 kubeadm.go:877] updating cluster {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:49.933358 1142862 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:49.933388 1142862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:49.970029 1142862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:49.970059 1142862 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:49.970118 1142862 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:49.970147 1142862 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.970163 1142862 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.970198 1142862 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.970239 1142862 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.970316 1142862 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.970328 1142862 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.970379 1142862 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971837 1142862 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.971841 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.971808 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.971876 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.971816 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.971813 1142862 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.126557 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 13:50:50.146394 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.149455 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.149755 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.154990 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.162983 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.177520 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.188703 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.299288 1142862 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 13:50:50.299312 1142862 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 13:50:50.299345 1142862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.299350 1142862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.299389 1142862 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 13:50:50.299406 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299413 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299422 1142862 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.299488 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353368 1142862 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 13:50:50.353431 1142862 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.353485 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353506 1142862 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 13:50:50.353543 1142862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.353591 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379011 1142862 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 13:50:50.379028 1142862 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 13:50:50.379054 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.379062 1142862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.379105 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379075 1142862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.379146 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.379181 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379212 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.379229 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.379239 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.482204 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 13:50:50.482210 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.482332 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.511560 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 13:50:50.511671 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 13:50:50.511721 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.511769 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:50.511772 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 13:50:50.511682 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:50.511868 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:50.512290 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 13:50:50.512360 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:50.549035 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 13:50:50.549061 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 13:50:50.549066 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549156 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549166 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:50:47.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.193894 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.694053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.694081 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.194053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.694265 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.694283 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:52.194444 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.321194 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.816679 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:52.818121 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:51.372716 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:53.372880 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.573615 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 13:50:50.573661 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 13:50:50.573708 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 13:50:50.573737 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:50.573754 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 13:50:50.573816 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 13:50:50.573839 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 13:50:52.739312 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (2.190102069s)
	I0603 13:50:52.739333 1142862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.165569436s)
	I0603 13:50:52.739354 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 13:50:52.739365 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 13:50:52.739372 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:52.739420 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:54.995960 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.256502953s)
	I0603 13:50:54.996000 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 13:50:54.996019 1142862 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:54.996076 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:52.694071 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.193597 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.694503 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.193609 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.694446 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.193856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.693583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.194271 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.693558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:57.194427 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.317668 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:57.318423 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.872030 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:58.376034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.844775 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 13:50:55.844853 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:55.844967 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:58.110074 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.265068331s)
	I0603 13:50:58.110103 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 13:50:58.110115 1142862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:58.110169 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:59.979789 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.869594477s)
	I0603 13:50:59.979817 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 13:50:59.979832 1142862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:59.979875 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:57.694027 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.193718 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.693488 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.193725 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.694310 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.194455 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.694182 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.193916 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.693504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:02.194236 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.816444 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:01.817757 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:00.872105 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:03.373427 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:04.067476 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.087571936s)
	I0603 13:51:04.067529 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 13:51:04.067549 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:04.067605 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:02.694248 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.194094 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.694072 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.194494 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.693899 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.193578 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.193934 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.693586 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:07.193993 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.316979 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:06.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.871061 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:08.371377 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.819264 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.75162069s)
	I0603 13:51:05.819302 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 13:51:05.819334 1142862 cache_images.go:123] Successfully loaded all cached images
	I0603 13:51:05.819341 1142862 cache_images.go:92] duration metric: took 15.849267186s to LoadCachedImages
	I0603 13:51:05.819352 1142862 kubeadm.go:928] updating node { 192.168.72.125 8443 v1.30.1 crio true true} ...
	I0603 13:51:05.819549 1142862 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-817450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:51:05.819636 1142862 ssh_runner.go:195] Run: crio config
	I0603 13:51:05.874089 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:05.874114 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:05.874127 1142862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:51:05.874152 1142862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.125 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-817450 NodeName:no-preload-817450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:51:05.874339 1142862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-817450"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:51:05.874411 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:51:05.886116 1142862 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:51:05.886185 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:51:05.896269 1142862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 13:51:05.914746 1142862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:51:05.931936 1142862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 13:51:05.949151 1142862 ssh_runner.go:195] Run: grep 192.168.72.125	control-plane.minikube.internal$ /etc/hosts
	I0603 13:51:05.953180 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:51:05.966675 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:51:06.107517 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:51:06.129233 1142862 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450 for IP: 192.168.72.125
	I0603 13:51:06.129264 1142862 certs.go:194] generating shared ca certs ...
	I0603 13:51:06.129280 1142862 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:51:06.129517 1142862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:51:06.129583 1142862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:51:06.129597 1142862 certs.go:256] generating profile certs ...
	I0603 13:51:06.129686 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/client.key
	I0603 13:51:06.129746 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key.e8ec030b
	I0603 13:51:06.129779 1142862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key
	I0603 13:51:06.129885 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:51:06.129912 1142862 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:51:06.129919 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:51:06.129939 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:51:06.129965 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:51:06.129991 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:51:06.130028 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:51:06.130817 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:51:06.171348 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:51:06.206270 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:51:06.240508 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:51:06.292262 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:51:06.320406 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:51:06.346655 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:51:06.375908 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:51:06.401723 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:51:06.425992 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:51:06.450484 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:51:06.475206 1142862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:51:06.492795 1142862 ssh_runner.go:195] Run: openssl version
	I0603 13:51:06.499759 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:51:06.511760 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516690 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516763 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.523284 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:51:06.535250 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:51:06.545921 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550765 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550823 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.556898 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:51:06.567717 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:51:06.578662 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584084 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584153 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.591566 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:51:06.603554 1142862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:51:06.608323 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:51:06.614939 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:51:06.621519 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:51:06.627525 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:51:06.633291 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:51:06.639258 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 13:51:06.644789 1142862 kubeadm.go:391] StartCluster: {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:51:06.644876 1142862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:51:06.644928 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.694731 1142862 cri.go:89] found id: ""
	I0603 13:51:06.694811 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:51:06.709773 1142862 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:51:06.709804 1142862 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:51:06.709812 1142862 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:51:06.709875 1142862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:51:06.721095 1142862 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:51:06.722256 1142862 kubeconfig.go:125] found "no-preload-817450" server: "https://192.168.72.125:8443"
	I0603 13:51:06.724877 1142862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:51:06.735753 1142862 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.125
	I0603 13:51:06.735789 1142862 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:51:06.735802 1142862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:51:06.735847 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.776650 1142862 cri.go:89] found id: ""
	I0603 13:51:06.776743 1142862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:51:06.796259 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:51:06.809765 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:51:06.809785 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:51:06.809839 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:51:06.819821 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:51:06.819878 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:51:06.829960 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:51:06.839510 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:51:06.839561 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:51:06.849346 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.858834 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:51:06.858886 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.869159 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:51:06.879672 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:51:06.879739 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:51:06.889393 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:51:06.899309 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:07.021375 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.119929 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.098510185s)
	I0603 13:51:08.119959 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.318752 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.396713 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.506285 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:51:08.506384 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.006865 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.506528 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.582432 1142862 api_server.go:72] duration metric: took 1.076134659s to wait for apiserver process to appear ...
	I0603 13:51:09.582463 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:51:09.582507 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:07.693540 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.194490 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.694498 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.194496 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.694286 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.193605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.694326 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.193904 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.694504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:12.194093 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.318739 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.817309 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.371622 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.372640 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:14.871007 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.049693 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.049731 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.049748 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.084495 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.084526 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.084541 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.141515 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.141555 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:12.582630 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.590279 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.082813 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.097350 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.097380 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.582895 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.587479 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.587511 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.083076 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.087531 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.087561 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.583203 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.587735 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.587781 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.082844 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.087403 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:15.087438 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.583226 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:51:15.601732 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:51:15.601762 1142862 api_server.go:131] duration metric: took 6.019291333s to wait for apiserver health ...
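
The block above is minikube polling the apiserver's /healthz endpoint roughly every 500ms until it stops returning 500; the [+]/[-] lines are the per-check breakdown the apiserver itself includes in the failure body. A minimal standalone sketch of that kind of poll is shown below — it is not minikube's actual api_server.go (which also wires in client certificates and the test harness logging); the address is taken from the log and anonymous access to /healthz is assumed for illustration.

```go
// healthz_poll.go - minimal sketch of polling a kube-apiserver /healthz endpoint
// until it reports healthy. Assumptions: the address from the log above, a
// self-signed serving cert (hence InsecureSkipVerify), and anonymous access.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.125:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// On failure the apiserver returns the per-check [+]/[-] list
			// seen in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
```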
	I0603 13:51:15.601775 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:15.601784 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:15.603654 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:51:12.694356 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.194219 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.693546 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.694003 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.694012 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.193567 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.694014 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:17.193554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
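
The repeated ssh_runner lines from process 1143678 belong to another cluster under test and simply check whether any kube-apiserver process exists yet, via pgrep. A small sketch of that check is below; the pgrep invocation is copied verbatim from the log (-f matches against the full command line, -x requires that line to match the pattern exactly, -n keeps only the newest match), and this is only an illustration of the probe, not minikube's code.

```go
// apiserver_proc_check.go - sketch of the probe the log keeps retrying:
// "does a process whose full command line matches kube-apiserver.*minikube.* exist?"
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits non-zero when nothing matches, which is why the
		// command is re-run every ~500ms in the log above.
		fmt.Println("kube-apiserver process not found yet")
		return
	}
	fmt.Printf("kube-apiserver running, pid: %s", out)
}
```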
	I0603 13:51:13.320666 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.818073 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.369593 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:19.369916 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.605291 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:51:15.618333 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
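
Once the apiserver is healthy, the bridge CNI step copies a 496-byte config to /etc/cni/net.d/1-k8s.conflist; the file's contents are not shown in the log. For illustration only, the sketch below writes a hypothetical minimal bridge + host-local conflist of that general shape — the JSON values are assumptions, not the file minikube actually generates.

```go
// write_bridge_conflist.go - illustration of the "Configuring bridge CNI" step.
// The conflist body below is HYPOTHETICAL: a minimal bridge plugin with
// host-local IPAM, chosen only to show the general shape of such a file.
package main

import (
	"log"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

func main() {
	// Writing under /etc/cni/net.d normally requires root, matching the
	// sudo mkdir seen in the log above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```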
	I0603 13:51:15.640539 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:51:15.651042 1142862 system_pods.go:59] 8 kube-system pods found
	I0603 13:51:15.651086 1142862 system_pods.go:61] "coredns-7db6d8ff4d-s562v" [be995d41-2b25-4839-a36b-212a507e7db7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:51:15.651102 1142862 system_pods.go:61] "etcd-no-preload-817450" [1b21708b-d81b-4594-a186-546437467c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:51:15.651117 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [0741a4bf-3161-4cf3-a9c6-36af2a0c4fde] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:51:15.651126 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [43713383-9197-4874-8aa9-7b1b1f05e4b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:51:15.651133 1142862 system_pods.go:61] "kube-proxy-2j4sg" [112657ad-311a-46ee-b5c0-6f544991465e] Running
	I0603 13:51:15.651145 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [40db5c40-dc01-4fd3-a5e0-06a6ee1fd0a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:51:15.651152 1142862 system_pods.go:61] "metrics-server-569cc877fc-mtvrq" [00cb7657-2564-4d25-8faa-b6f618e61115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:51:15.651163 1142862 system_pods.go:61] "storage-provisioner" [913d3120-32ce-4212-84be-9e3b99f2a894] Running
	I0603 13:51:15.651171 1142862 system_pods.go:74] duration metric: took 10.608401ms to wait for pod list to return data ...
	I0603 13:51:15.651181 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:51:15.654759 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:51:15.654784 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:51:15.654795 1142862 node_conditions.go:105] duration metric: took 3.608137ms to run NodePressure ...
	I0603 13:51:15.654813 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:15.940085 1142862 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944785 1142862 kubeadm.go:733] kubelet initialised
	I0603 13:51:15.944808 1142862 kubeadm.go:734] duration metric: took 4.692827ms waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944817 1142862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:51:15.950113 1142862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:17.958330 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.456029 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.693856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.193853 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.693858 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.193568 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.693680 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.193556 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.694129 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.193662 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.694445 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:22.193668 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.317128 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.317375 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.317530 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.371070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:23.871400 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.958183 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:21.958208 1142862 pod_ready.go:81] duration metric: took 6.008058251s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:21.958220 1142862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:23.964785 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.694004 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.193793 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.694340 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.194411 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.694314 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.194501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.693545 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.194255 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.694312 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:27.194453 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.817165 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.317176 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:26.369665 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:28.370392 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:25.966060 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.965236 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.965267 1142862 pod_ready.go:81] duration metric: took 6.007038184s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.965281 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969898 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.969920 1142862 pod_ready.go:81] duration metric: took 4.630357ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969932 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974500 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.974517 1142862 pod_ready.go:81] duration metric: took 4.577117ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974526 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978510 1142862 pod_ready.go:92] pod "kube-proxy-2j4sg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.978530 1142862 pod_ready.go:81] duration metric: took 3.997645ms for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978537 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982488 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.982507 1142862 pod_ready.go:81] duration metric: took 3.962666ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982518 1142862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:29.989265 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
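
The pod_ready lines above poll individual kube-system pods and treat one as "Ready" once its PodReady condition reports True. A client-go sketch of that single check follows; it assumes a kubeconfig at $HOME/.kube/config and reuses a pod name from the log, while minikube's own pod_ready.go adds the retry/timeout loop seen above.

```go
// pod_ready_sketch.go - sketch of the check behind the pod_ready.go lines:
// a pod counts as Ready when its PodReady condition has status True.
// Assumes a kubeconfig at $HOME/.kube/config; the pod name is taken from the log.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-s562v", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}
```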
	I0603 13:51:27.694334 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.193809 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.693744 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.193608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.194111 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.694213 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.694336 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:32.193716 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.324199 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:30.370435 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.870510 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.872543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.990649 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.488899 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.693501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.194174 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.693995 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.194242 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.693961 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.194052 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.693730 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.193559 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.693763 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:37.194274 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.816533 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.316832 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.371543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:39.372034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.489364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:38.490431 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:40.490888 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.693590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.194328 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.694296 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.194272 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.693607 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:40.193595 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:40.193691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:40.237747 1143678 cri.go:89] found id: ""
	I0603 13:51:40.237776 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.237785 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:40.237792 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:40.237854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:40.275924 1143678 cri.go:89] found id: ""
	I0603 13:51:40.275964 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.275975 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:40.275983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:40.276049 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:40.314827 1143678 cri.go:89] found id: ""
	I0603 13:51:40.314857 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.314870 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:40.314877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:40.314939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:40.359040 1143678 cri.go:89] found id: ""
	I0603 13:51:40.359072 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.359084 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:40.359092 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:40.359154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:40.396136 1143678 cri.go:89] found id: ""
	I0603 13:51:40.396170 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.396185 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:40.396194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:40.396261 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:40.436766 1143678 cri.go:89] found id: ""
	I0603 13:51:40.436803 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.436814 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:40.436828 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:40.436902 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:40.477580 1143678 cri.go:89] found id: ""
	I0603 13:51:40.477606 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.477615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:40.477621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:40.477713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:40.518920 1143678 cri.go:89] found id: ""
	I0603 13:51:40.518960 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.518972 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:40.518984 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:40.519001 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:40.659881 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:40.659913 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:40.659932 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:40.727850 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:40.727894 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:40.774153 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:40.774189 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:40.828054 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:40.828094 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
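
Each "Gathering logs" cycle above starts by asking CRI-O, via crictl, whether any control-plane containers exist; since none do, it falls back to collecting kubelet, dmesg, CRI-O journal and container-status output, and the describe-nodes step fails because nothing is answering on localhost:8443. The sketch below shows that container-existence check, with the crictl flags copied verbatim from the log; it is an illustration of the probe, not minikube's cri.go.

```go
// crictl_check.go - sketch of the "listing CRI containers" step in the log:
// crictl is run with --quiet so only container IDs are printed, and an empty
// result is treated as "no kube-apiserver container exists yet".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		// Matches the `found id: ""` / `0 containers` lines above.
		fmt.Println("no kube-apiserver container found")
		return
	}
	fmt.Printf("%d kube-apiserver container(s): %v\n", len(ids), ids)
}
```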
	I0603 13:51:38.820985 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.322044 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.870717 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.872112 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:42.988898 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:44.989384 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.342659 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:43.357063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:43.357131 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:43.398000 1143678 cri.go:89] found id: ""
	I0603 13:51:43.398036 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.398045 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:43.398051 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:43.398106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:43.436761 1143678 cri.go:89] found id: ""
	I0603 13:51:43.436805 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.436814 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:43.436820 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:43.436872 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:43.478122 1143678 cri.go:89] found id: ""
	I0603 13:51:43.478154 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.478164 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:43.478172 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:43.478243 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:43.514473 1143678 cri.go:89] found id: ""
	I0603 13:51:43.514511 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.514523 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:43.514532 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:43.514600 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:43.552354 1143678 cri.go:89] found id: ""
	I0603 13:51:43.552390 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.552399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:43.552405 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:43.552489 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:43.590637 1143678 cri.go:89] found id: ""
	I0603 13:51:43.590665 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.590677 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:43.590685 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:43.590745 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:43.633958 1143678 cri.go:89] found id: ""
	I0603 13:51:43.634001 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.634013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:43.634021 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:43.634088 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:43.672640 1143678 cri.go:89] found id: ""
	I0603 13:51:43.672683 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.672695 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:43.672716 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:43.672733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:43.725880 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:43.725937 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:43.743736 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:43.743771 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:43.831757 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:43.831785 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:43.831801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:43.905062 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:43.905114 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:46.459588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:46.472911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:46.472983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:46.513723 1143678 cri.go:89] found id: ""
	I0603 13:51:46.513757 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.513768 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:46.513776 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:46.513845 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:46.549205 1143678 cri.go:89] found id: ""
	I0603 13:51:46.549234 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.549242 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:46.549251 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:46.549311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:46.585004 1143678 cri.go:89] found id: ""
	I0603 13:51:46.585042 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.585053 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:46.585063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:46.585120 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:46.620534 1143678 cri.go:89] found id: ""
	I0603 13:51:46.620571 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.620582 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:46.620590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:46.620661 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:46.655974 1143678 cri.go:89] found id: ""
	I0603 13:51:46.656005 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.656014 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:46.656020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:46.656091 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:46.693078 1143678 cri.go:89] found id: ""
	I0603 13:51:46.693141 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.693158 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:46.693168 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:46.693244 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:46.729177 1143678 cri.go:89] found id: ""
	I0603 13:51:46.729213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.729223 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:46.729232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:46.729300 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:46.766899 1143678 cri.go:89] found id: ""
	I0603 13:51:46.766929 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.766937 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:46.766946 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:46.766959 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:46.826715 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:46.826757 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:46.841461 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:46.841504 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:46.914505 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:46.914533 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:46.914551 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:46.989886 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:46.989928 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:43.817456 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:45.817576 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.370927 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:48.371196 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.990440 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.489483 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.532804 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:49.547359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:49.547438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:49.584262 1143678 cri.go:89] found id: ""
	I0603 13:51:49.584299 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.584311 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:49.584319 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:49.584389 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:49.622332 1143678 cri.go:89] found id: ""
	I0603 13:51:49.622372 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.622384 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:49.622393 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:49.622488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:49.664339 1143678 cri.go:89] found id: ""
	I0603 13:51:49.664378 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.664390 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:49.664399 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:49.664468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:49.712528 1143678 cri.go:89] found id: ""
	I0603 13:51:49.712558 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.712565 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:49.712574 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:49.712640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:49.767343 1143678 cri.go:89] found id: ""
	I0603 13:51:49.767374 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.767382 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:49.767388 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:49.767450 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:49.822457 1143678 cri.go:89] found id: ""
	I0603 13:51:49.822491 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.822499 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:49.822505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:49.822561 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:49.867823 1143678 cri.go:89] found id: ""
	I0603 13:51:49.867855 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.867867 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:49.867875 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:49.867936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:49.906765 1143678 cri.go:89] found id: ""
	I0603 13:51:49.906797 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.906805 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:49.906816 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:49.906829 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:49.921731 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:49.921764 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:49.993832 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:49.993860 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:49.993878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:50.070080 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:50.070125 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:50.112323 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:50.112357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:48.317830 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.816577 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.817035 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.871664 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.871865 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:51.990258 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:54.489037 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.666289 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:52.680475 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:52.680550 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:52.722025 1143678 cri.go:89] found id: ""
	I0603 13:51:52.722063 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.722075 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:52.722083 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:52.722145 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:52.759709 1143678 cri.go:89] found id: ""
	I0603 13:51:52.759742 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.759754 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:52.759762 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:52.759838 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:52.797131 1143678 cri.go:89] found id: ""
	I0603 13:51:52.797162 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.797171 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:52.797176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:52.797231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:52.832921 1143678 cri.go:89] found id: ""
	I0603 13:51:52.832951 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.832959 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:52.832965 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:52.833024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:52.869361 1143678 cri.go:89] found id: ""
	I0603 13:51:52.869389 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.869399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:52.869422 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:52.869495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:52.905863 1143678 cri.go:89] found id: ""
	I0603 13:51:52.905897 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.905909 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:52.905917 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:52.905985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:52.940407 1143678 cri.go:89] found id: ""
	I0603 13:51:52.940438 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.940446 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:52.940452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:52.940517 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:52.982079 1143678 cri.go:89] found id: ""
	I0603 13:51:52.982115 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.982126 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:52.982138 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:52.982155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:53.066897 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:53.066942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:53.108016 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:53.108056 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:53.164105 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:53.164151 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:53.178708 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:53.178743 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:53.257441 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.758633 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:55.774241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:55.774329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:55.809373 1143678 cri.go:89] found id: ""
	I0603 13:51:55.809436 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.809450 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:55.809467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:55.809539 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:55.849741 1143678 cri.go:89] found id: ""
	I0603 13:51:55.849768 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.849776 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:55.849783 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:55.849834 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:55.893184 1143678 cri.go:89] found id: ""
	I0603 13:51:55.893216 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.893228 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:55.893238 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:55.893307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:55.931572 1143678 cri.go:89] found id: ""
	I0603 13:51:55.931618 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.931632 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:55.931642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:55.931713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:55.969490 1143678 cri.go:89] found id: ""
	I0603 13:51:55.969527 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.969538 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:55.969546 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:55.969614 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:56.009266 1143678 cri.go:89] found id: ""
	I0603 13:51:56.009301 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.009313 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:56.009321 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:56.009394 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:56.049471 1143678 cri.go:89] found id: ""
	I0603 13:51:56.049520 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.049540 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:56.049547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:56.049616 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:56.090176 1143678 cri.go:89] found id: ""
	I0603 13:51:56.090213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.090228 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:56.090241 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:56.090266 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:56.175692 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:56.175737 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:56.222642 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:56.222683 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:56.276258 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:56.276301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:56.291703 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:56.291739 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:56.364788 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.316604 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.816804 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:55.370917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.372903 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:59.870783 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:56.489636 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.990006 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.865558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:58.879983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:58.880074 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:58.917422 1143678 cri.go:89] found id: ""
	I0603 13:51:58.917461 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.917473 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:58.917480 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:58.917535 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:58.953900 1143678 cri.go:89] found id: ""
	I0603 13:51:58.953933 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.953943 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:58.953959 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:58.954030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:58.988677 1143678 cri.go:89] found id: ""
	I0603 13:51:58.988704 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.988713 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:58.988721 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:58.988783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:59.023436 1143678 cri.go:89] found id: ""
	I0603 13:51:59.023474 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.023486 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:59.023494 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:59.023570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:59.061357 1143678 cri.go:89] found id: ""
	I0603 13:51:59.061386 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.061394 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:59.061400 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:59.061487 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:59.102995 1143678 cri.go:89] found id: ""
	I0603 13:51:59.103025 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.103038 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:59.103047 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:59.103124 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:59.141443 1143678 cri.go:89] found id: ""
	I0603 13:51:59.141480 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.141492 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:59.141499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:59.141586 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:59.182909 1143678 cri.go:89] found id: ""
	I0603 13:51:59.182943 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.182953 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:59.182967 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:59.182984 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:59.259533 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:59.259580 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:59.308976 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:59.309016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.362092 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:59.362142 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:59.378836 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:59.378887 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:59.454524 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:01.954939 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:01.969968 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:01.970039 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:02.014226 1143678 cri.go:89] found id: ""
	I0603 13:52:02.014267 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.014280 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:02.014289 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:02.014361 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:02.051189 1143678 cri.go:89] found id: ""
	I0603 13:52:02.051244 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.051259 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:02.051268 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:02.051349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:02.093509 1143678 cri.go:89] found id: ""
	I0603 13:52:02.093548 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.093575 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:02.093586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:02.093718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:02.132069 1143678 cri.go:89] found id: ""
	I0603 13:52:02.132113 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.132129 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:02.132138 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:02.132299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:02.168043 1143678 cri.go:89] found id: ""
	I0603 13:52:02.168071 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.168079 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:02.168085 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:02.168138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:02.207029 1143678 cri.go:89] found id: ""
	I0603 13:52:02.207064 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.207074 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:02.207081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:02.207134 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:02.247669 1143678 cri.go:89] found id: ""
	I0603 13:52:02.247719 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.247728 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:02.247734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:02.247848 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:02.285780 1143678 cri.go:89] found id: ""
	I0603 13:52:02.285817 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.285829 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:02.285841 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:02.285863 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.817887 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.818381 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.871338 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:04.371052 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:00.990263 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.990651 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:05.490343 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.348775 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:02.349776 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:02.364654 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:02.364691 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:02.447948 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:02.447978 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:02.447992 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:02.534039 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:02.534100 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:05.080437 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:05.094169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:05.094245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:05.132312 1143678 cri.go:89] found id: ""
	I0603 13:52:05.132339 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.132346 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:05.132352 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:05.132423 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:05.168941 1143678 cri.go:89] found id: ""
	I0603 13:52:05.168979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.168990 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:05.168999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:05.169068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:05.207151 1143678 cri.go:89] found id: ""
	I0603 13:52:05.207188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.207196 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:05.207202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:05.207272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:05.258807 1143678 cri.go:89] found id: ""
	I0603 13:52:05.258839 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.258850 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:05.258859 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:05.259004 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:05.298250 1143678 cri.go:89] found id: ""
	I0603 13:52:05.298285 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.298297 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:05.298306 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:05.298381 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:05.340922 1143678 cri.go:89] found id: ""
	I0603 13:52:05.340951 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.340959 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:05.340966 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:05.341027 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:05.382680 1143678 cri.go:89] found id: ""
	I0603 13:52:05.382707 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.382715 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:05.382722 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:05.382777 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:05.426774 1143678 cri.go:89] found id: ""
	I0603 13:52:05.426801 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.426811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:05.426822 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:05.426836 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:05.483042 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:05.483091 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:05.499119 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:05.499159 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:05.580933 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:05.580962 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:05.580983 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:05.660395 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:05.660437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:03.818676 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.316881 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.371515 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.871174 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:07.490662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:09.992709 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.200887 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:08.215113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:08.215203 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:08.252367 1143678 cri.go:89] found id: ""
	I0603 13:52:08.252404 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.252417 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:08.252427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:08.252500 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:08.289249 1143678 cri.go:89] found id: ""
	I0603 13:52:08.289279 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.289290 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:08.289298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:08.289364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:08.331155 1143678 cri.go:89] found id: ""
	I0603 13:52:08.331181 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.331195 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:08.331201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:08.331258 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:08.371376 1143678 cri.go:89] found id: ""
	I0603 13:52:08.371400 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.371408 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:08.371415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:08.371477 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:08.408009 1143678 cri.go:89] found id: ""
	I0603 13:52:08.408045 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.408057 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:08.408065 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:08.408119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:08.446377 1143678 cri.go:89] found id: ""
	I0603 13:52:08.446413 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.446421 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:08.446429 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:08.446504 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:08.485429 1143678 cri.go:89] found id: ""
	I0603 13:52:08.485461 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.485471 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:08.485479 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:08.485546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:08.527319 1143678 cri.go:89] found id: ""
	I0603 13:52:08.527363 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.527375 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:08.527388 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:08.527414 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:08.602347 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:08.602371 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:08.602384 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:08.683855 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:08.683902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.724402 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:08.724443 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:08.781154 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:08.781202 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.297827 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:11.313927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:11.314006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:11.352622 1143678 cri.go:89] found id: ""
	I0603 13:52:11.352660 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.352671 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:11.352678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:11.352755 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:11.395301 1143678 cri.go:89] found id: ""
	I0603 13:52:11.395338 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.395351 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:11.395360 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:11.395442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:11.431104 1143678 cri.go:89] found id: ""
	I0603 13:52:11.431143 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.431155 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:11.431170 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:11.431234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:11.470177 1143678 cri.go:89] found id: ""
	I0603 13:52:11.470212 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.470223 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:11.470241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:11.470309 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:11.508741 1143678 cri.go:89] found id: ""
	I0603 13:52:11.508779 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.508803 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:11.508810 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:11.508906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:11.544970 1143678 cri.go:89] found id: ""
	I0603 13:52:11.545002 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.545012 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:11.545022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:11.545093 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:11.583606 1143678 cri.go:89] found id: ""
	I0603 13:52:11.583636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.583653 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:11.583666 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:11.583739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:11.624770 1143678 cri.go:89] found id: ""
	I0603 13:52:11.624806 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.624815 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:11.624824 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:11.624841 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:11.680251 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:11.680298 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.695656 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:11.695695 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:11.770414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:11.770478 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:11.770497 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:11.850812 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:11.850871 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.318447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:10.817734 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:11.372533 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:13.871822 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:12.490666 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.988752 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.398649 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:14.411591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:14.411689 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:14.447126 1143678 cri.go:89] found id: ""
	I0603 13:52:14.447158 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.447170 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:14.447178 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:14.447245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:14.486681 1143678 cri.go:89] found id: ""
	I0603 13:52:14.486716 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.486728 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:14.486735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:14.486799 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:14.521297 1143678 cri.go:89] found id: ""
	I0603 13:52:14.521326 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.521337 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:14.521343 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:14.521443 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:14.565086 1143678 cri.go:89] found id: ""
	I0603 13:52:14.565121 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.565130 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:14.565136 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:14.565196 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:14.601947 1143678 cri.go:89] found id: ""
	I0603 13:52:14.601975 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.601984 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:14.601990 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:14.602044 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:14.638332 1143678 cri.go:89] found id: ""
	I0603 13:52:14.638359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.638366 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:14.638374 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:14.638435 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:14.675254 1143678 cri.go:89] found id: ""
	I0603 13:52:14.675284 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.675293 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:14.675299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:14.675354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:14.712601 1143678 cri.go:89] found id: ""
	I0603 13:52:14.712631 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.712639 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:14.712649 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:14.712663 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:14.787026 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:14.787068 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:14.836534 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:14.836564 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:14.889682 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:14.889729 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:14.905230 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:14.905264 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:14.979090 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:13.317070 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.317490 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.816412 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.871901 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.370626 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:16.989195 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.990108 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.479590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:17.495088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:17.495250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:17.530832 1143678 cri.go:89] found id: ""
	I0603 13:52:17.530871 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.530883 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:17.530891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:17.530966 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:17.567183 1143678 cri.go:89] found id: ""
	I0603 13:52:17.567213 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.567224 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:17.567232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:17.567305 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:17.602424 1143678 cri.go:89] found id: ""
	I0603 13:52:17.602458 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.602469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:17.602493 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:17.602570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:17.641148 1143678 cri.go:89] found id: ""
	I0603 13:52:17.641184 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.641197 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:17.641205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:17.641273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:17.679004 1143678 cri.go:89] found id: ""
	I0603 13:52:17.679031 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.679039 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:17.679045 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:17.679102 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:17.717667 1143678 cri.go:89] found id: ""
	I0603 13:52:17.717698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.717707 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:17.717715 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:17.717786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:17.760262 1143678 cri.go:89] found id: ""
	I0603 13:52:17.760300 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.760323 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:17.760331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:17.760416 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:17.796910 1143678 cri.go:89] found id: ""
	I0603 13:52:17.796943 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.796960 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:17.796976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:17.796990 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:17.811733 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:17.811768 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:17.891891 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:17.891920 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:17.891939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:17.969495 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:17.969535 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:18.032622 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:18.032654 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.586079 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:20.599118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:20.599202 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:20.633732 1143678 cri.go:89] found id: ""
	I0603 13:52:20.633770 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.633780 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:20.633787 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:20.633841 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:20.668126 1143678 cri.go:89] found id: ""
	I0603 13:52:20.668155 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.668163 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:20.668169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:20.668231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:20.704144 1143678 cri.go:89] found id: ""
	I0603 13:52:20.704177 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.704187 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:20.704194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:20.704251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:20.745562 1143678 cri.go:89] found id: ""
	I0603 13:52:20.745594 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.745602 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:20.745608 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:20.745663 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:20.788998 1143678 cri.go:89] found id: ""
	I0603 13:52:20.789041 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.789053 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:20.789075 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:20.789152 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:20.832466 1143678 cri.go:89] found id: ""
	I0603 13:52:20.832495 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.832503 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:20.832510 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:20.832575 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:20.875212 1143678 cri.go:89] found id: ""
	I0603 13:52:20.875248 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.875258 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:20.875267 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:20.875336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:20.912957 1143678 cri.go:89] found id: ""
	I0603 13:52:20.912989 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.912999 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:20.913011 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:20.913030 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.963655 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:20.963700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:20.978619 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:20.978658 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:21.057136 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:21.057163 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:21.057185 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:21.136368 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:21.136415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:19.817227 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.817625 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:20.871465 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.370757 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.488564 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.991662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.676222 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:23.691111 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:23.691213 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:23.733282 1143678 cri.go:89] found id: ""
	I0603 13:52:23.733319 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.733332 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:23.733341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:23.733438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:23.780841 1143678 cri.go:89] found id: ""
	I0603 13:52:23.780873 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.780882 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:23.780894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:23.780947 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:23.820521 1143678 cri.go:89] found id: ""
	I0603 13:52:23.820553 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.820565 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:23.820573 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:23.820636 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:23.857684 1143678 cri.go:89] found id: ""
	I0603 13:52:23.857728 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.857739 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:23.857747 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:23.857818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:23.896800 1143678 cri.go:89] found id: ""
	I0603 13:52:23.896829 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.896842 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:23.896850 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:23.896914 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:23.935511 1143678 cri.go:89] found id: ""
	I0603 13:52:23.935538 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.935547 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:23.935554 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:23.935608 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:23.973858 1143678 cri.go:89] found id: ""
	I0603 13:52:23.973885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.973895 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:23.973901 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:23.973961 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:24.012491 1143678 cri.go:89] found id: ""
	I0603 13:52:24.012521 1143678 logs.go:276] 0 containers: []
	W0603 13:52:24.012532 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:24.012545 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:24.012569 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.064274 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:24.064319 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:24.079382 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:24.079420 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:24.153708 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:24.153733 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:24.153749 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:24.233104 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:24.233148 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:26.774771 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:26.789853 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:26.789924 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:26.830089 1143678 cri.go:89] found id: ""
	I0603 13:52:26.830129 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.830167 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:26.830176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:26.830251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:26.866907 1143678 cri.go:89] found id: ""
	I0603 13:52:26.866941 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.866952 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:26.866960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:26.867031 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:26.915028 1143678 cri.go:89] found id: ""
	I0603 13:52:26.915061 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.915070 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:26.915079 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:26.915151 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:26.962044 1143678 cri.go:89] found id: ""
	I0603 13:52:26.962075 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.962083 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:26.962088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:26.962154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:26.996156 1143678 cri.go:89] found id: ""
	I0603 13:52:26.996188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.996196 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:26.996202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:26.996265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:27.038593 1143678 cri.go:89] found id: ""
	I0603 13:52:27.038627 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.038636 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:27.038642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:27.038708 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:27.076116 1143678 cri.go:89] found id: ""
	I0603 13:52:27.076144 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.076153 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:27.076159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:27.076228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:27.110653 1143678 cri.go:89] found id: ""
	I0603 13:52:27.110688 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.110700 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:27.110714 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:27.110733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:27.193718 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
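The empty 'found id: ""' results and the repeated refusal on localhost:8443 above mean no kube-apiserver container was ever created on this node, so every kubectl call the log gatherer makes fails the same way. A minimal sketch of reproducing those same checks by hand, assuming shell access to the node (for example via minikube ssh -p <profile>, where <profile> is a placeholder for the affected profile name) and using only commands that already appear in the log:

    # Is there any kube-apiserver container at all, running or exited?
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Kubelet logs usually show why the apiserver static pod was never started.
    sudo journalctl -u kubelet -n 400

    # Same describe call the log gatherer runs; it keeps failing while the apiserver is down.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig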
	I0603 13:52:27.193743 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:27.193756 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:27.269423 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:27.269483 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:27.307899 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:27.307939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.317663 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.817148 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:25.371861 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.870070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:29.870299 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.488753 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:28.489065 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:30.489568 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
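The interleaved pod_ready lines with other process IDs (1143252, 1143450, 1142862) appear to come from parallel test runs, each polling its metrics-server pod and still seeing Ready as False. A hedged equivalent of that poll from the host, reusing one of the pod names from the log and a placeholder <context> for the profile's kubeconfig context:

    # One-shot status check of the pod the log keeps polling.
    kubectl --context <context> -n kube-system get pod metrics-server-569cc877fc-v7d9t

    # Block until the Ready condition is met (or the timeout expires), mirroring the test's wait loop.
    kubectl --context <context> -n kube-system wait --for=condition=ready pod/metrics-server-569cc877fc-v7d9t --timeout=60s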
	I0603 13:52:27.363830 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:27.363878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:29.879016 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:29.893482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:29.893553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:29.932146 1143678 cri.go:89] found id: ""
	I0603 13:52:29.932190 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.932199 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:29.932205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:29.932259 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:29.968986 1143678 cri.go:89] found id: ""
	I0603 13:52:29.969020 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.969032 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:29.969040 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:29.969097 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:30.007190 1143678 cri.go:89] found id: ""
	I0603 13:52:30.007228 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.007238 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:30.007244 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:30.007303 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:30.044607 1143678 cri.go:89] found id: ""
	I0603 13:52:30.044638 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.044646 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:30.044652 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:30.044706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:30.083103 1143678 cri.go:89] found id: ""
	I0603 13:52:30.083179 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.083193 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:30.083204 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:30.083280 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:30.124125 1143678 cri.go:89] found id: ""
	I0603 13:52:30.124152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.124160 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:30.124167 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:30.124234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:30.164293 1143678 cri.go:89] found id: ""
	I0603 13:52:30.164329 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.164345 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:30.164353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:30.164467 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:30.219980 1143678 cri.go:89] found id: ""
	I0603 13:52:30.220015 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.220028 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:30.220042 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:30.220063 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:30.313282 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:30.313305 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:30.313323 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:30.393759 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:30.393801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:30.441384 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:30.441434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:30.493523 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:30.493558 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:28.817554 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.317629 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.870659 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.870954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:32.990340 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.495665 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.009114 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:33.023177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:33.023278 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:33.065346 1143678 cri.go:89] found id: ""
	I0603 13:52:33.065388 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.065400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:33.065424 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:33.065506 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:33.108513 1143678 cri.go:89] found id: ""
	I0603 13:52:33.108549 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.108561 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:33.108569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:33.108640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:33.146053 1143678 cri.go:89] found id: ""
	I0603 13:52:33.146082 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.146089 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:33.146107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:33.146165 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:33.187152 1143678 cri.go:89] found id: ""
	I0603 13:52:33.187195 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.187206 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:33.187216 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:33.187302 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:33.223887 1143678 cri.go:89] found id: ""
	I0603 13:52:33.223920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.223932 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:33.223941 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:33.224010 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:33.263902 1143678 cri.go:89] found id: ""
	I0603 13:52:33.263958 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.263971 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:33.263980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:33.264048 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:33.302753 1143678 cri.go:89] found id: ""
	I0603 13:52:33.302785 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.302796 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:33.302805 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:33.302859 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:33.340711 1143678 cri.go:89] found id: ""
	I0603 13:52:33.340745 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.340754 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:33.340763 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:33.340780 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:33.400226 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:33.400271 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:33.414891 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:33.414923 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:33.498121 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:33.498156 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:33.498172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.575682 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:33.575731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.116930 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:36.133001 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:36.133070 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:36.182727 1143678 cri.go:89] found id: ""
	I0603 13:52:36.182763 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.182774 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:36.182782 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:36.182851 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:36.228804 1143678 cri.go:89] found id: ""
	I0603 13:52:36.228841 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.228854 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:36.228862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:36.228929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:36.279320 1143678 cri.go:89] found id: ""
	I0603 13:52:36.279359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.279370 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:36.279378 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:36.279461 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:36.319725 1143678 cri.go:89] found id: ""
	I0603 13:52:36.319751 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.319759 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:36.319765 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:36.319819 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:36.356657 1143678 cri.go:89] found id: ""
	I0603 13:52:36.356685 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.356693 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:36.356703 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:36.356760 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:36.393397 1143678 cri.go:89] found id: ""
	I0603 13:52:36.393448 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.393459 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:36.393467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:36.393545 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:36.429211 1143678 cri.go:89] found id: ""
	I0603 13:52:36.429246 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.429254 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:36.429260 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:36.429324 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:36.466796 1143678 cri.go:89] found id: ""
	I0603 13:52:36.466831 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.466839 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:36.466849 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:36.466862 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.509871 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:36.509900 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:36.562167 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:36.562206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:36.577014 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:36.577047 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:36.657581 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:36.657604 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:36.657625 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.817495 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.820854 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:36.371645 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:38.871484 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:37.989038 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.989986 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.242339 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:39.257985 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:39.258072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:39.300153 1143678 cri.go:89] found id: ""
	I0603 13:52:39.300185 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.300197 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:39.300205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:39.300304 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:39.336117 1143678 cri.go:89] found id: ""
	I0603 13:52:39.336152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.336162 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:39.336175 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:39.336307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:39.375945 1143678 cri.go:89] found id: ""
	I0603 13:52:39.375979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.375990 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:39.375998 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:39.376066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:39.417207 1143678 cri.go:89] found id: ""
	I0603 13:52:39.417242 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.417253 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:39.417261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:39.417340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:39.456259 1143678 cri.go:89] found id: ""
	I0603 13:52:39.456295 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.456307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:39.456315 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:39.456377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:39.494879 1143678 cri.go:89] found id: ""
	I0603 13:52:39.494904 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.494913 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:39.494919 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:39.494979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:39.532129 1143678 cri.go:89] found id: ""
	I0603 13:52:39.532157 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.532168 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:39.532177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:39.532267 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:39.570662 1143678 cri.go:89] found id: ""
	I0603 13:52:39.570693 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.570703 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:39.570717 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:39.570734 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:39.622008 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:39.622057 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:39.636849 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:39.636884 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:39.719914 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:39.719948 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:39.719967 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:39.801723 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:39.801769 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:38.317321 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:40.817649 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.819652 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:41.370965 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:43.371900 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.490311 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:44.988731 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.348936 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:42.363663 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:42.363735 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:42.400584 1143678 cri.go:89] found id: ""
	I0603 13:52:42.400616 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.400625 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:42.400631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:42.400685 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:42.438853 1143678 cri.go:89] found id: ""
	I0603 13:52:42.438885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.438893 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:42.438899 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:42.438954 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:42.474980 1143678 cri.go:89] found id: ""
	I0603 13:52:42.475013 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.475025 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:42.475032 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:42.475086 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:42.511027 1143678 cri.go:89] found id: ""
	I0603 13:52:42.511056 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.511068 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:42.511077 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:42.511237 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:42.545333 1143678 cri.go:89] found id: ""
	I0603 13:52:42.545367 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.545378 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:42.545386 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:42.545468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:42.583392 1143678 cri.go:89] found id: ""
	I0603 13:52:42.583438 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.583556 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:42.583591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:42.583656 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:42.620886 1143678 cri.go:89] found id: ""
	I0603 13:52:42.620916 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.620924 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:42.620930 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:42.620985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:42.656265 1143678 cri.go:89] found id: ""
	I0603 13:52:42.656301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.656313 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:42.656327 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:42.656344 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:42.711078 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:42.711124 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:42.727751 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:42.727788 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:42.802330 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:42.802356 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:42.802370 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:42.883700 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:42.883742 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.424591 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:45.440797 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:45.440883 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:45.483664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.483698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.483709 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:45.483717 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:45.483789 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:45.523147 1143678 cri.go:89] found id: ""
	I0603 13:52:45.523182 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.523193 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:45.523201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:45.523273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:45.563483 1143678 cri.go:89] found id: ""
	I0603 13:52:45.563516 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.563527 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:45.563536 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:45.563598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:45.603574 1143678 cri.go:89] found id: ""
	I0603 13:52:45.603603 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.603618 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:45.603625 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:45.603680 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:45.642664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.642694 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.642705 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:45.642714 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:45.642793 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:45.679961 1143678 cri.go:89] found id: ""
	I0603 13:52:45.679998 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.680011 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:45.680026 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:45.680100 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:45.716218 1143678 cri.go:89] found id: ""
	I0603 13:52:45.716255 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.716263 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:45.716270 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:45.716364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:45.752346 1143678 cri.go:89] found id: ""
	I0603 13:52:45.752374 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.752382 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:45.752391 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:45.752405 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.793992 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:45.794029 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:45.844930 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:45.844973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:45.859594 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:45.859633 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:45.936469 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:45.936498 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:45.936515 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:45.317705 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.818994 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:45.870780 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.871003 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.871625 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:46.990866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.488680 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:48.514959 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:48.528331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:48.528401 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:48.565671 1143678 cri.go:89] found id: ""
	I0603 13:52:48.565703 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.565715 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:48.565724 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:48.565786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:48.603938 1143678 cri.go:89] found id: ""
	I0603 13:52:48.603973 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.603991 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:48.604000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:48.604068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:48.643521 1143678 cri.go:89] found id: ""
	I0603 13:52:48.643550 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.643562 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:48.643571 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:48.643627 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:48.678264 1143678 cri.go:89] found id: ""
	I0603 13:52:48.678301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.678312 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:48.678320 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:48.678407 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:48.714974 1143678 cri.go:89] found id: ""
	I0603 13:52:48.715014 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.715026 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:48.715034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:48.715138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:48.750364 1143678 cri.go:89] found id: ""
	I0603 13:52:48.750396 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.750408 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:48.750416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:48.750482 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:48.788203 1143678 cri.go:89] found id: ""
	I0603 13:52:48.788238 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.788249 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:48.788258 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:48.788345 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:48.826891 1143678 cri.go:89] found id: ""
	I0603 13:52:48.826920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.826928 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:48.826938 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:48.826951 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:48.877271 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:48.877315 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:48.892155 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:48.892187 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:48.973433 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:48.973459 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:48.973473 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:49.062819 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:49.062888 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:51.614261 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:51.628056 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:51.628142 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:51.662894 1143678 cri.go:89] found id: ""
	I0603 13:52:51.662924 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.662935 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:51.662942 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:51.663009 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:51.701847 1143678 cri.go:89] found id: ""
	I0603 13:52:51.701878 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.701889 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:51.701896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:51.701963 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:51.737702 1143678 cri.go:89] found id: ""
	I0603 13:52:51.737741 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.737752 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:51.737760 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:51.737833 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:51.772913 1143678 cri.go:89] found id: ""
	I0603 13:52:51.772944 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.772956 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:51.772964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:51.773034 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:51.810268 1143678 cri.go:89] found id: ""
	I0603 13:52:51.810298 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.810307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:51.810312 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:51.810377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:51.848575 1143678 cri.go:89] found id: ""
	I0603 13:52:51.848612 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.848624 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:51.848633 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:51.848696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:51.886500 1143678 cri.go:89] found id: ""
	I0603 13:52:51.886536 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.886549 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:51.886560 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:51.886617 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:51.924070 1143678 cri.go:89] found id: ""
	I0603 13:52:51.924104 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.924115 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:51.924128 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:51.924146 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:51.940324 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:51.940355 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:52.019958 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:52.019997 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:52.020015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:52.095953 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:52.095999 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:52.141070 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:52.141102 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:50.317008 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:52.317142 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.872275 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.376761 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.490098 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:53.491292 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.694651 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:54.708508 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:54.708597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:54.745708 1143678 cri.go:89] found id: ""
	I0603 13:52:54.745748 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.745762 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:54.745770 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:54.745842 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:54.783335 1143678 cri.go:89] found id: ""
	I0603 13:52:54.783369 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.783381 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:54.783389 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:54.783465 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:54.824111 1143678 cri.go:89] found id: ""
	I0603 13:52:54.824140 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.824151 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:54.824159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:54.824230 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:54.868676 1143678 cri.go:89] found id: ""
	I0603 13:52:54.868710 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.868721 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:54.868730 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:54.868801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:54.906180 1143678 cri.go:89] found id: ""
	I0603 13:52:54.906216 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.906227 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:54.906235 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:54.906310 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:54.945499 1143678 cri.go:89] found id: ""
	I0603 13:52:54.945532 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.945544 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:54.945552 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:54.945619 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:54.986785 1143678 cri.go:89] found id: ""
	I0603 13:52:54.986812 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.986820 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:54.986826 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:54.986888 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:55.035290 1143678 cri.go:89] found id: ""
	I0603 13:52:55.035320 1143678 logs.go:276] 0 containers: []
	W0603 13:52:55.035329 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:55.035338 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:55.035352 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:55.085384 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:55.085451 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:55.100699 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:55.100733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:55.171587 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:55.171614 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:55.171638 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:55.249078 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:55.249123 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:54.317435 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.318657 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.869954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.872728 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:55.990512 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.489578 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:00.490668 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:57.791538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:57.804373 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:57.804437 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:57.843969 1143678 cri.go:89] found id: ""
	I0603 13:52:57.844007 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.844016 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:57.844022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:57.844077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:57.881201 1143678 cri.go:89] found id: ""
	I0603 13:52:57.881239 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.881252 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:57.881261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:57.881336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:57.917572 1143678 cri.go:89] found id: ""
	I0603 13:52:57.917601 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.917610 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:57.917617 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:57.917671 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:57.951603 1143678 cri.go:89] found id: ""
	I0603 13:52:57.951642 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.951654 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:57.951661 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:57.951716 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:57.992833 1143678 cri.go:89] found id: ""
	I0603 13:52:57.992863 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.992874 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:57.992881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:57.992945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:58.031595 1143678 cri.go:89] found id: ""
	I0603 13:52:58.031636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.031648 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:58.031657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:58.031723 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:58.068947 1143678 cri.go:89] found id: ""
	I0603 13:52:58.068985 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.068996 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:58.069005 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:58.069077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:58.106559 1143678 cri.go:89] found id: ""
	I0603 13:52:58.106587 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.106598 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:58.106623 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:58.106640 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:58.162576 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:58.162623 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:58.177104 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:58.177155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:58.250279 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:58.250312 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:58.250329 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.330876 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:58.330920 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:00.871443 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:00.885505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:00.885589 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:00.923878 1143678 cri.go:89] found id: ""
	I0603 13:53:00.923910 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.923920 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:00.923928 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:00.923995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:00.960319 1143678 cri.go:89] found id: ""
	I0603 13:53:00.960362 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.960375 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:00.960384 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:00.960449 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:00.998806 1143678 cri.go:89] found id: ""
	I0603 13:53:00.998845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.998857 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:00.998866 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:00.998929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:01.033211 1143678 cri.go:89] found id: ""
	I0603 13:53:01.033245 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.033256 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:01.033265 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:01.033341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:01.072852 1143678 cri.go:89] found id: ""
	I0603 13:53:01.072883 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.072891 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:01.072898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:01.072950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:01.115667 1143678 cri.go:89] found id: ""
	I0603 13:53:01.115699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.115711 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:01.115719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:01.115824 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:01.153676 1143678 cri.go:89] found id: ""
	I0603 13:53:01.153717 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.153733 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:01.153741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:01.153815 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:01.188970 1143678 cri.go:89] found id: ""
	I0603 13:53:01.189003 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.189017 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:01.189031 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:01.189049 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:01.233151 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:01.233214 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:01.287218 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:01.287269 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:01.302370 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:01.302408 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:01.378414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:01.378444 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:01.378463 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.817003 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.317698 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.371257 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.872917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:02.989133 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:04.990930 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.957327 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:03.971246 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:03.971340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:04.007299 1143678 cri.go:89] found id: ""
	I0603 13:53:04.007335 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.007347 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:04.007356 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:04.007427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:04.046364 1143678 cri.go:89] found id: ""
	I0603 13:53:04.046396 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.046405 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:04.046411 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:04.046469 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:04.082094 1143678 cri.go:89] found id: ""
	I0603 13:53:04.082127 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.082139 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:04.082148 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:04.082209 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:04.117389 1143678 cri.go:89] found id: ""
	I0603 13:53:04.117434 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.117446 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:04.117454 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:04.117530 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:04.150560 1143678 cri.go:89] found id: ""
	I0603 13:53:04.150596 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.150606 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:04.150614 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:04.150678 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:04.184808 1143678 cri.go:89] found id: ""
	I0603 13:53:04.184845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.184857 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:04.184865 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:04.184935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:04.220286 1143678 cri.go:89] found id: ""
	I0603 13:53:04.220317 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.220326 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:04.220332 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:04.220385 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:04.258898 1143678 cri.go:89] found id: ""
	I0603 13:53:04.258929 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.258941 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:04.258955 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:04.258972 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:04.312151 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:04.312198 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:04.329908 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:04.329943 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:04.402075 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:04.402106 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:04.402138 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:04.482873 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:04.482936 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:07.049978 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:07.063072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:07.063140 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:07.097703 1143678 cri.go:89] found id: ""
	I0603 13:53:07.097737 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.097748 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:07.097755 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:07.097811 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:07.134826 1143678 cri.go:89] found id: ""
	I0603 13:53:07.134865 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.134878 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:07.134886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:07.134955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:07.178015 1143678 cri.go:89] found id: ""
	I0603 13:53:07.178050 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.178061 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:07.178068 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:07.178138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:07.215713 1143678 cri.go:89] found id: ""
	I0603 13:53:07.215753 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.215764 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:07.215777 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:07.215840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:07.251787 1143678 cri.go:89] found id: ""
	I0603 13:53:07.251815 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.251824 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:07.251830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:07.251897 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:07.293357 1143678 cri.go:89] found id: ""
	I0603 13:53:07.293387 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.293398 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:07.293427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:07.293496 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:07.329518 1143678 cri.go:89] found id: ""
	I0603 13:53:07.329551 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.329561 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:07.329569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:07.329650 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:03.819203 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.317653 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.370539 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:08.370701 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.490706 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:09.990002 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.369534 1143678 cri.go:89] found id: ""
	I0603 13:53:07.369576 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.369587 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:07.369601 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:07.369617 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:07.424211 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:07.424260 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:07.439135 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:07.439172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:07.511325 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:07.511360 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:07.511378 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:07.588348 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:07.588393 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:10.129812 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:10.143977 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:10.144057 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:10.181873 1143678 cri.go:89] found id: ""
	I0603 13:53:10.181906 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.181918 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:10.181926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:10.181981 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:10.218416 1143678 cri.go:89] found id: ""
	I0603 13:53:10.218460 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.218473 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:10.218482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:10.218562 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:10.253580 1143678 cri.go:89] found id: ""
	I0603 13:53:10.253618 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.253630 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:10.253646 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:10.253717 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:10.302919 1143678 cri.go:89] found id: ""
	I0603 13:53:10.302949 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.302957 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:10.302964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:10.303024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:10.343680 1143678 cri.go:89] found id: ""
	I0603 13:53:10.343709 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.343721 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:10.343729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:10.343798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:10.379281 1143678 cri.go:89] found id: ""
	I0603 13:53:10.379307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.379315 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:10.379322 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:10.379374 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:10.420197 1143678 cri.go:89] found id: ""
	I0603 13:53:10.420225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.420233 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:10.420239 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:10.420322 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:10.458578 1143678 cri.go:89] found id: ""
	I0603 13:53:10.458609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.458618 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:10.458629 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:10.458642 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:10.511785 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:10.511828 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:10.526040 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:10.526081 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:10.603721 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:10.603749 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:10.603766 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:10.684153 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:10.684204 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:08.816447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.318264 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:10.374788 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:12.871019 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.871064 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.992127 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.488866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:13.227605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:13.241131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:13.241228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:13.284636 1143678 cri.go:89] found id: ""
	I0603 13:53:13.284667 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.284675 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:13.284681 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:13.284737 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:13.322828 1143678 cri.go:89] found id: ""
	I0603 13:53:13.322861 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.322873 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:13.322881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:13.322945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:13.360061 1143678 cri.go:89] found id: ""
	I0603 13:53:13.360089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.360097 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:13.360103 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:13.360176 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:13.397115 1143678 cri.go:89] found id: ""
	I0603 13:53:13.397149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.397158 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:13.397164 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:13.397234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:13.434086 1143678 cri.go:89] found id: ""
	I0603 13:53:13.434118 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.434127 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:13.434135 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:13.434194 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:13.470060 1143678 cri.go:89] found id: ""
	I0603 13:53:13.470089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.470101 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:13.470113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:13.470189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:13.508423 1143678 cri.go:89] found id: ""
	I0603 13:53:13.508464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.508480 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:13.508487 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:13.508552 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:13.546713 1143678 cri.go:89] found id: ""
	I0603 13:53:13.546752 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.546765 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:13.546778 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:13.546796 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:13.632984 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:13.633027 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:13.679169 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:13.679216 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:13.735765 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:13.735812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.750175 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:13.750210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:13.826571 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.327185 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:16.340163 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:16.340253 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:16.380260 1143678 cri.go:89] found id: ""
	I0603 13:53:16.380292 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.380300 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:16.380307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:16.380373 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:16.420408 1143678 cri.go:89] found id: ""
	I0603 13:53:16.420438 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.420449 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:16.420457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:16.420534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:16.459250 1143678 cri.go:89] found id: ""
	I0603 13:53:16.459285 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.459297 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:16.459307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:16.459377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:16.496395 1143678 cri.go:89] found id: ""
	I0603 13:53:16.496427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.496436 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:16.496444 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:16.496516 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:16.534402 1143678 cri.go:89] found id: ""
	I0603 13:53:16.534433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.534442 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:16.534449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:16.534514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:16.571550 1143678 cri.go:89] found id: ""
	I0603 13:53:16.571577 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.571584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:16.571591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:16.571659 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:16.608425 1143678 cri.go:89] found id: ""
	I0603 13:53:16.608457 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.608468 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:16.608482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:16.608549 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:16.647282 1143678 cri.go:89] found id: ""
	I0603 13:53:16.647315 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.647324 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:16.647334 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:16.647351 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:16.728778 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.728814 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:16.728831 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:16.822702 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:16.822747 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:16.868816 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:16.868845 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:16.922262 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:16.922301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.818935 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.316865 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:17.370681 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.371232 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.489494 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:18.490176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:20.491433 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.438231 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:19.452520 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:19.452603 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:19.488089 1143678 cri.go:89] found id: ""
	I0603 13:53:19.488121 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.488133 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:19.488141 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:19.488216 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:19.524494 1143678 cri.go:89] found id: ""
	I0603 13:53:19.524527 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.524537 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:19.524543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:19.524595 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:19.561288 1143678 cri.go:89] found id: ""
	I0603 13:53:19.561323 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.561333 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:19.561341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:19.561420 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:19.597919 1143678 cri.go:89] found id: ""
	I0603 13:53:19.597965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.597976 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:19.597984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:19.598056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:19.634544 1143678 cri.go:89] found id: ""
	I0603 13:53:19.634579 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.634591 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:19.634599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:19.634668 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:19.671473 1143678 cri.go:89] found id: ""
	I0603 13:53:19.671506 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.671518 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:19.671527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:19.671598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:19.707968 1143678 cri.go:89] found id: ""
	I0603 13:53:19.708000 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.708011 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:19.708019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:19.708119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:19.745555 1143678 cri.go:89] found id: ""
	I0603 13:53:19.745593 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.745604 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:19.745617 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:19.745631 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:19.830765 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:19.830812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:19.875160 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:19.875197 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:19.927582 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:19.927627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:19.942258 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:19.942289 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:20.016081 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:18.820067 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.319103 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.871214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.371680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.990210 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.990605 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.516859 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:22.534973 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:22.535040 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:22.593003 1143678 cri.go:89] found id: ""
	I0603 13:53:22.593043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.593051 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:22.593058 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:22.593121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:22.649916 1143678 cri.go:89] found id: ""
	I0603 13:53:22.649951 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.649963 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:22.649971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:22.650030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:22.689397 1143678 cri.go:89] found id: ""
	I0603 13:53:22.689449 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.689459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:22.689465 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:22.689521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:22.725109 1143678 cri.go:89] found id: ""
	I0603 13:53:22.725149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.725161 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:22.725169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:22.725250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:22.761196 1143678 cri.go:89] found id: ""
	I0603 13:53:22.761225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.761237 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:22.761245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:22.761311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:22.804065 1143678 cri.go:89] found id: ""
	I0603 13:53:22.804103 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.804112 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:22.804119 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:22.804189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:22.840456 1143678 cri.go:89] found id: ""
	I0603 13:53:22.840485 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.840493 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:22.840499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:22.840553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:22.876796 1143678 cri.go:89] found id: ""
	I0603 13:53:22.876831 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.876842 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:22.876854 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:22.876869 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:22.957274 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:22.957317 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:22.998360 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:22.998394 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.054895 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:23.054942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:23.070107 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:23.070141 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:23.147460 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:25.647727 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:25.663603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:25.663691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:25.698102 1143678 cri.go:89] found id: ""
	I0603 13:53:25.698139 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.698150 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:25.698159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:25.698227 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:25.738601 1143678 cri.go:89] found id: ""
	I0603 13:53:25.738641 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.738648 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:25.738655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:25.738718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:25.780622 1143678 cri.go:89] found id: ""
	I0603 13:53:25.780657 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.780670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:25.780678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:25.780751 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:25.816950 1143678 cri.go:89] found id: ""
	I0603 13:53:25.816978 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.816989 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:25.816997 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:25.817060 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:25.860011 1143678 cri.go:89] found id: ""
	I0603 13:53:25.860051 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.860063 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:25.860072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:25.860138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:25.898832 1143678 cri.go:89] found id: ""
	I0603 13:53:25.898866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.898878 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:25.898886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:25.898959 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:25.937483 1143678 cri.go:89] found id: ""
	I0603 13:53:25.937518 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.937533 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:25.937541 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:25.937607 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:25.973972 1143678 cri.go:89] found id: ""
	I0603 13:53:25.974008 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.974021 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:25.974034 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:25.974065 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:25.989188 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:25.989227 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:26.065521 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:26.065546 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:26.065560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:26.147852 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:26.147899 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:26.191395 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:26.191431 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.816928 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:25.818534 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:26.872084 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.872558 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:27.489951 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:29.989352 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.751041 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:28.764764 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:28.764826 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:28.808232 1143678 cri.go:89] found id: ""
	I0603 13:53:28.808271 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.808285 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:28.808293 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:28.808369 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:28.849058 1143678 cri.go:89] found id: ""
	I0603 13:53:28.849094 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.849107 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:28.849114 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:28.849187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:28.892397 1143678 cri.go:89] found id: ""
	I0603 13:53:28.892427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.892441 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:28.892447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:28.892515 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:28.932675 1143678 cri.go:89] found id: ""
	I0603 13:53:28.932715 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.932727 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:28.932735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:28.932840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:28.969732 1143678 cri.go:89] found id: ""
	I0603 13:53:28.969769 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.969781 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:28.969789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:28.969857 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:29.007765 1143678 cri.go:89] found id: ""
	I0603 13:53:29.007791 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.007798 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:29.007804 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:29.007865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:29.044616 1143678 cri.go:89] found id: ""
	I0603 13:53:29.044652 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.044664 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:29.044675 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:29.044734 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:29.081133 1143678 cri.go:89] found id: ""
	I0603 13:53:29.081166 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.081187 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:29.081198 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:29.081213 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:29.095753 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:29.095783 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:29.174472 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:29.174496 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:29.174516 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:29.251216 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:29.251262 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:29.289127 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:29.289168 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:31.845335 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:31.860631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:31.860720 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:31.904507 1143678 cri.go:89] found id: ""
	I0603 13:53:31.904544 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.904556 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:31.904564 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:31.904633 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:31.940795 1143678 cri.go:89] found id: ""
	I0603 13:53:31.940832 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.940845 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:31.940852 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:31.940921 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:31.978447 1143678 cri.go:89] found id: ""
	I0603 13:53:31.978481 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.978499 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:31.978507 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:31.978569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:32.017975 1143678 cri.go:89] found id: ""
	I0603 13:53:32.018009 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.018018 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:32.018025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:32.018089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:32.053062 1143678 cri.go:89] found id: ""
	I0603 13:53:32.053091 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.053099 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:32.053106 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:32.053181 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:32.089822 1143678 cri.go:89] found id: ""
	I0603 13:53:32.089856 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.089868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:32.089877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:32.089944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:32.126243 1143678 cri.go:89] found id: ""
	I0603 13:53:32.126280 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.126291 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:32.126299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:32.126358 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:32.163297 1143678 cri.go:89] found id: ""
	I0603 13:53:32.163346 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.163357 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:32.163370 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:32.163386 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:32.218452 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:32.218495 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:32.233688 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:32.233731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:32.318927 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:32.318947 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:32.318963 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:28.317046 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:30.317308 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.318273 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.370654 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:33.371038 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.991594 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:34.492142 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.403734 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:32.403786 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:34.947857 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:34.961894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:34.961983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:35.006279 1143678 cri.go:89] found id: ""
	I0603 13:53:35.006308 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.006318 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:35.006326 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:35.006398 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:35.042765 1143678 cri.go:89] found id: ""
	I0603 13:53:35.042794 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.042807 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:35.042815 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:35.042877 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:35.084332 1143678 cri.go:89] found id: ""
	I0603 13:53:35.084365 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.084375 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:35.084381 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:35.084448 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:35.121306 1143678 cri.go:89] found id: ""
	I0603 13:53:35.121337 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.121348 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:35.121358 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:35.121444 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:35.155952 1143678 cri.go:89] found id: ""
	I0603 13:53:35.155994 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.156008 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:35.156016 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:35.156089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:35.196846 1143678 cri.go:89] found id: ""
	I0603 13:53:35.196881 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.196893 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:35.196902 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:35.196972 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:35.232396 1143678 cri.go:89] found id: ""
	I0603 13:53:35.232429 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.232440 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:35.232449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:35.232528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:35.269833 1143678 cri.go:89] found id: ""
	I0603 13:53:35.269862 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.269872 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:35.269885 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:35.269902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:35.357754 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:35.357794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:35.399793 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:35.399822 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:35.453742 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:35.453782 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:35.468431 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:35.468465 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:35.547817 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:34.816178 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.817093 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:35.373072 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:37.870173 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.989364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.990163 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.048517 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:38.063481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:38.063569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:38.100487 1143678 cri.go:89] found id: ""
	I0603 13:53:38.100523 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.100535 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:38.100543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:38.100612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:38.137627 1143678 cri.go:89] found id: ""
	I0603 13:53:38.137665 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.137678 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:38.137686 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:38.137754 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:38.176138 1143678 cri.go:89] found id: ""
	I0603 13:53:38.176172 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.176190 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:38.176199 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:38.176265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:38.214397 1143678 cri.go:89] found id: ""
	I0603 13:53:38.214439 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.214451 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:38.214459 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:38.214528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:38.250531 1143678 cri.go:89] found id: ""
	I0603 13:53:38.250563 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.250573 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:38.250580 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:38.250642 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:38.286558 1143678 cri.go:89] found id: ""
	I0603 13:53:38.286587 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.286595 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:38.286601 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:38.286652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:38.327995 1143678 cri.go:89] found id: ""
	I0603 13:53:38.328043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.328055 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:38.328062 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:38.328126 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:38.374266 1143678 cri.go:89] found id: ""
	I0603 13:53:38.374300 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.374311 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:38.374324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:38.374341 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:38.426876 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:38.426918 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:38.443296 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:38.443340 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:38.514702 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:38.514728 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:38.514746 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:38.601536 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:38.601590 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:41.141766 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:41.155927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:41.156006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:41.196829 1143678 cri.go:89] found id: ""
	I0603 13:53:41.196871 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.196884 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:41.196896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:41.196967 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:41.231729 1143678 cri.go:89] found id: ""
	I0603 13:53:41.231780 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.231802 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:41.231812 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:41.231900 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:41.266663 1143678 cri.go:89] found id: ""
	I0603 13:53:41.266699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.266711 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:41.266720 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:41.266783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:41.305251 1143678 cri.go:89] found id: ""
	I0603 13:53:41.305278 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.305286 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:41.305292 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:41.305351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:41.342527 1143678 cri.go:89] found id: ""
	I0603 13:53:41.342556 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.342568 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:41.342575 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:41.342637 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:41.379950 1143678 cri.go:89] found id: ""
	I0603 13:53:41.379982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.379992 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:41.379999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:41.380068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:41.414930 1143678 cri.go:89] found id: ""
	I0603 13:53:41.414965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.414973 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:41.414980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:41.415043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:41.449265 1143678 cri.go:89] found id: ""
	I0603 13:53:41.449299 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.449310 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:41.449324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:41.449343 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:41.502525 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:41.502560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:41.519357 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:41.519390 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:41.591443 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:41.591471 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:41.591485 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:41.668758 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:41.668802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:39.317333 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.317598 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:40.370844 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:42.871161 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.489574 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:43.989620 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:44.211768 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:44.226789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:44.226869 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:44.265525 1143678 cri.go:89] found id: ""
	I0603 13:53:44.265553 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.265561 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:44.265568 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:44.265646 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:44.304835 1143678 cri.go:89] found id: ""
	I0603 13:53:44.304866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.304874 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:44.304880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:44.304935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:44.345832 1143678 cri.go:89] found id: ""
	I0603 13:53:44.345875 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.345885 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:44.345891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:44.345950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:44.386150 1143678 cri.go:89] found id: ""
	I0603 13:53:44.386186 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.386198 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:44.386207 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:44.386268 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:44.423662 1143678 cri.go:89] found id: ""
	I0603 13:53:44.423697 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.423709 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:44.423719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:44.423788 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:44.462437 1143678 cri.go:89] found id: ""
	I0603 13:53:44.462464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.462473 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:44.462481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:44.462567 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:44.501007 1143678 cri.go:89] found id: ""
	I0603 13:53:44.501062 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.501074 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:44.501081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:44.501138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:44.535501 1143678 cri.go:89] found id: ""
	I0603 13:53:44.535543 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.535554 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:44.535567 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:44.535585 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:44.587114 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:44.587157 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:44.602151 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:44.602180 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:44.674065 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:44.674104 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:44.674122 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:44.757443 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:44.757488 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.306481 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:47.319895 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:47.319958 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:43.818030 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.316852 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:45.370762 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.371799 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:49.871512 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.488076 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:48.488472 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.488892 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.356975 1143678 cri.go:89] found id: ""
	I0603 13:53:47.357013 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.357026 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:47.357034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:47.357106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:47.393840 1143678 cri.go:89] found id: ""
	I0603 13:53:47.393869 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.393877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:47.393884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:47.393936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:47.428455 1143678 cri.go:89] found id: ""
	I0603 13:53:47.428493 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.428506 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:47.428514 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:47.428597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:47.463744 1143678 cri.go:89] found id: ""
	I0603 13:53:47.463777 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.463788 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:47.463795 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:47.463855 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:47.498134 1143678 cri.go:89] found id: ""
	I0603 13:53:47.498159 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.498167 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:47.498173 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:47.498245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:47.534153 1143678 cri.go:89] found id: ""
	I0603 13:53:47.534195 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.534206 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:47.534219 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:47.534272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:47.567148 1143678 cri.go:89] found id: ""
	I0603 13:53:47.567179 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.567187 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:47.567194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:47.567249 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:47.605759 1143678 cri.go:89] found id: ""
	I0603 13:53:47.605790 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.605798 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:47.605810 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:47.605824 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:47.683651 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:47.683692 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:47.683705 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:47.763810 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:47.763848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.806092 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:47.806131 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:47.859637 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:47.859677 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.377538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:50.391696 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:50.391776 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:50.433968 1143678 cri.go:89] found id: ""
	I0603 13:53:50.434001 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.434013 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:50.434020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:50.434080 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:50.470561 1143678 cri.go:89] found id: ""
	I0603 13:53:50.470589 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.470596 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:50.470603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:50.470662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:50.510699 1143678 cri.go:89] found id: ""
	I0603 13:53:50.510727 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.510735 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:50.510741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:50.510808 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:50.553386 1143678 cri.go:89] found id: ""
	I0603 13:53:50.553433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.553445 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:50.553452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:50.553533 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:50.589731 1143678 cri.go:89] found id: ""
	I0603 13:53:50.589779 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.589792 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:50.589801 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:50.589885 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:50.625144 1143678 cri.go:89] found id: ""
	I0603 13:53:50.625180 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.625192 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:50.625201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:50.625274 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:50.669021 1143678 cri.go:89] found id: ""
	I0603 13:53:50.669053 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.669061 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:50.669067 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:50.669121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:50.714241 1143678 cri.go:89] found id: ""
	I0603 13:53:50.714270 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.714284 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:50.714297 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:50.714314 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:50.766290 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:50.766333 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.797242 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:50.797275 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:50.866589 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:50.866616 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:50.866637 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:50.948808 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:50.948854 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:48.318282 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.817445 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.370798 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.377027 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.490719 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.989907 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:53.496797 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:53.511944 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:53.512021 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:53.549028 1143678 cri.go:89] found id: ""
	I0603 13:53:53.549057 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.549066 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:53.549072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:53.549128 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:53.583533 1143678 cri.go:89] found id: ""
	I0603 13:53:53.583566 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.583578 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:53.583586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:53.583652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:53.618578 1143678 cri.go:89] found id: ""
	I0603 13:53:53.618609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.618618 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:53.618626 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:53.618701 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:53.653313 1143678 cri.go:89] found id: ""
	I0603 13:53:53.653347 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.653358 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:53.653364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:53.653442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:53.689805 1143678 cri.go:89] found id: ""
	I0603 13:53:53.689839 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.689849 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:53.689857 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:53.689931 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:53.725538 1143678 cri.go:89] found id: ""
	I0603 13:53:53.725571 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.725584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:53.725592 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:53.725648 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:53.762284 1143678 cri.go:89] found id: ""
	I0603 13:53:53.762325 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.762336 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:53.762345 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:53.762419 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:53.799056 1143678 cri.go:89] found id: ""
	I0603 13:53:53.799083 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.799092 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:53.799102 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:53.799115 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:53.873743 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:53.873809 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:53.919692 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:53.919724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:53.969068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:53.969109 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.983840 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:53.983866 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:54.054842 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.555587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:56.570014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:56.570076 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:56.604352 1143678 cri.go:89] found id: ""
	I0603 13:53:56.604386 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.604400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:56.604408 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:56.604479 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:56.648126 1143678 cri.go:89] found id: ""
	I0603 13:53:56.648161 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.648171 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:56.648177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:56.648231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:56.685621 1143678 cri.go:89] found id: ""
	I0603 13:53:56.685658 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.685670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:56.685678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:56.685763 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:56.721860 1143678 cri.go:89] found id: ""
	I0603 13:53:56.721891 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.721913 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:56.721921 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:56.721989 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:56.757950 1143678 cri.go:89] found id: ""
	I0603 13:53:56.757982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.757995 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:56.758002 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:56.758068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:56.794963 1143678 cri.go:89] found id: ""
	I0603 13:53:56.794991 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.794999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:56.795007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:56.795072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:56.831795 1143678 cri.go:89] found id: ""
	I0603 13:53:56.831827 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.831839 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:56.831846 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:56.831913 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:56.869263 1143678 cri.go:89] found id: ""
	I0603 13:53:56.869293 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.869303 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:56.869314 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:56.869331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:56.945068 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.945096 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:56.945110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:57.028545 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:57.028582 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:57.069973 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:57.070009 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:57.126395 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:57.126436 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.316616 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:55.316981 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:57.317295 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.870680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.371553 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.990964 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.489616 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.644870 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:59.658547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:59.658634 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:59.694625 1143678 cri.go:89] found id: ""
	I0603 13:53:59.694656 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.694665 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:59.694673 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:59.694740 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:59.730475 1143678 cri.go:89] found id: ""
	I0603 13:53:59.730573 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.730590 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:59.730599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:59.730696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:59.768533 1143678 cri.go:89] found id: ""
	I0603 13:53:59.768567 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.768580 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:59.768590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:59.768662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:59.804913 1143678 cri.go:89] found id: ""
	I0603 13:53:59.804944 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.804953 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:59.804960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:59.805014 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:59.850331 1143678 cri.go:89] found id: ""
	I0603 13:53:59.850363 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.850376 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:59.850385 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:59.850466 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:59.890777 1143678 cri.go:89] found id: ""
	I0603 13:53:59.890814 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.890826 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:59.890834 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:59.890909 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:59.931233 1143678 cri.go:89] found id: ""
	I0603 13:53:59.931268 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.931277 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:59.931283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:59.931354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:59.966267 1143678 cri.go:89] found id: ""
	I0603 13:53:59.966307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.966319 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:59.966333 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:59.966356 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:00.019884 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:00.019924 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:00.034936 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:00.034982 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:00.115002 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:00.115035 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:00.115053 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:00.189992 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:00.190035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
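
Before each collection pass, the harness probes every expected control-plane component by name with "crictl ps -a --quiet --name=<component>" and logs "No container was found matching ..." when nothing comes back, as in the sequence above; on this node every probe returns an empty ID list, which is why it falls back to journalctl, dmesg and crictl ps only. A hypothetical stand-alone sketch of that probe, assuming sudo and crictl are available on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // probe runs `crictl ps -a --quiet --name=<name>` and returns the container IDs
    // (one per output line); an empty slice means no container matches that name.
    func probe(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil // treat a crictl failure as "nothing found" for this sketch
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            if ids := probe(name); len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
            } else {
                fmt.Printf("%s: %v\n", name, ids)
            }
        }
    }
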
	I0603 13:53:59.818065 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.316183 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.870679 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.872563 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.490213 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.988699 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.737387 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:02.752131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:02.752220 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:02.787863 1143678 cri.go:89] found id: ""
	I0603 13:54:02.787893 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.787902 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:02.787908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:02.787974 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:02.824938 1143678 cri.go:89] found id: ""
	I0603 13:54:02.824973 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.824983 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:02.824989 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:02.825061 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:02.861425 1143678 cri.go:89] found id: ""
	I0603 13:54:02.861461 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.861469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:02.861476 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:02.861546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:02.907417 1143678 cri.go:89] found id: ""
	I0603 13:54:02.907453 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.907475 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:02.907483 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:02.907553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:02.953606 1143678 cri.go:89] found id: ""
	I0603 13:54:02.953640 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.953649 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:02.953655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:02.953728 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:03.007785 1143678 cri.go:89] found id: ""
	I0603 13:54:03.007816 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.007824 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:03.007830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:03.007896 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:03.058278 1143678 cri.go:89] found id: ""
	I0603 13:54:03.058316 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.058329 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:03.058338 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:03.058404 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:03.094766 1143678 cri.go:89] found id: ""
	I0603 13:54:03.094800 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.094811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:03.094824 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:03.094840 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:03.163663 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:03.163690 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:03.163704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:03.250751 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:03.250802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:03.292418 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:03.292466 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:03.344552 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:03.344600 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:05.859965 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:05.875255 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:05.875340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:05.918590 1143678 cri.go:89] found id: ""
	I0603 13:54:05.918619 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.918630 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:05.918637 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:05.918706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:05.953932 1143678 cri.go:89] found id: ""
	I0603 13:54:05.953969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.953980 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:05.953988 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:05.954056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:05.993319 1143678 cri.go:89] found id: ""
	I0603 13:54:05.993348 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.993359 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:05.993368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:05.993468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:06.033047 1143678 cri.go:89] found id: ""
	I0603 13:54:06.033079 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.033087 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:06.033100 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:06.033156 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:06.072607 1143678 cri.go:89] found id: ""
	I0603 13:54:06.072631 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.072640 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:06.072647 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:06.072698 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:06.109944 1143678 cri.go:89] found id: ""
	I0603 13:54:06.109990 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.109999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:06.110007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:06.110071 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:06.150235 1143678 cri.go:89] found id: ""
	I0603 13:54:06.150266 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.150276 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:06.150284 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:06.150349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:06.193963 1143678 cri.go:89] found id: ""
	I0603 13:54:06.193992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.194004 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:06.194017 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:06.194035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:06.235790 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:06.235827 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:06.289940 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:06.289980 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:06.305205 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:06.305240 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:06.381170 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:06.381191 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:06.381206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:04.316812 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.317759 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.370944 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.371668 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:05.989346 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.492021 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
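
Interleaved with the recovery loop, three other profiles (PIDs 1143252, 1143450, 1142862) keep polling their metrics-server pods for the Ready condition; further down, at 13:54:16, one of them gives up after its 4m0s budget with "context deadline exceeded". A rough client-go equivalent of that wait, assuming a reachable kubeconfig (the path is hypothetical) and using the pod name from the log; minikube's own pod_ready helper differs in detail:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod carries a Ready condition with status True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 2s for up to 4 minutes, matching the 4m0s budget seen in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-569cc877fc-v7d9t", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling through transient API errors
                }
                return isPodReady(pod), nil
            })
        fmt.Println("wait result:", err) // a context deadline error once the budget is exhausted
    }
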
	I0603 13:54:08.958985 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:08.973364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:08.973462 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:09.015050 1143678 cri.go:89] found id: ""
	I0603 13:54:09.015087 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.015099 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:09.015107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:09.015187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:09.054474 1143678 cri.go:89] found id: ""
	I0603 13:54:09.054508 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.054521 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:09.054533 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:09.054590 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:09.090867 1143678 cri.go:89] found id: ""
	I0603 13:54:09.090905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.090917 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:09.090926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:09.090995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:09.128401 1143678 cri.go:89] found id: ""
	I0603 13:54:09.128433 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.128441 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:09.128447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:09.128511 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:09.162952 1143678 cri.go:89] found id: ""
	I0603 13:54:09.162992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.163005 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:09.163013 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:09.163078 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:09.200375 1143678 cri.go:89] found id: ""
	I0603 13:54:09.200402 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.200410 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:09.200416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:09.200495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:09.244694 1143678 cri.go:89] found id: ""
	I0603 13:54:09.244729 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.244740 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:09.244749 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:09.244818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:09.281633 1143678 cri.go:89] found id: ""
	I0603 13:54:09.281666 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.281675 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:09.281686 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:09.281700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:09.341287 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:09.341331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:09.355379 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:09.355415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:09.435934 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:09.435960 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:09.435979 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:09.518203 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:09.518248 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.061538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:12.076939 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:12.077020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:12.114308 1143678 cri.go:89] found id: ""
	I0603 13:54:12.114344 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.114353 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:12.114359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:12.114427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:12.150336 1143678 cri.go:89] found id: ""
	I0603 13:54:12.150368 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.150383 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:12.150390 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:12.150455 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:12.189881 1143678 cri.go:89] found id: ""
	I0603 13:54:12.189934 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.189946 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:12.189954 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:12.190020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:12.226361 1143678 cri.go:89] found id: ""
	I0603 13:54:12.226396 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.226407 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:12.226415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:12.226488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:12.264216 1143678 cri.go:89] found id: ""
	I0603 13:54:12.264257 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.264265 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:12.264271 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:12.264341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:12.306563 1143678 cri.go:89] found id: ""
	I0603 13:54:12.306600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.306612 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:12.306620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:12.306690 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:12.347043 1143678 cri.go:89] found id: ""
	I0603 13:54:12.347082 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.347094 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:12.347105 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:12.347170 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:08.317824 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.816743 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.816776 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.372079 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.872314 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.990240 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:13.489762 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.383947 1143678 cri.go:89] found id: ""
	I0603 13:54:12.383978 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.383989 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:12.384001 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:12.384018 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:12.464306 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:12.464348 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.505079 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:12.505110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:12.563631 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:12.563666 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:12.578328 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:12.578357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:12.646015 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
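
Every "describe nodes" attempt in this section fails the same way: kubectl cannot reach the apiserver because nothing is listening on localhost:8443. A trivial reachability probe that reproduces the same symptom without kubectl:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "Connection refused" from kubectl means nothing is listening on the port at all.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err) // same symptom as the kubectl failures above
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }

A connection-refused error here, as opposed to a timeout, points at a process that is not running (or not yet bound to the port) rather than a network or firewall problem, which matches the empty crictl results for kube-apiserver in this profile.
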
	I0603 13:54:15.147166 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:15.163786 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:15.163865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:15.202249 1143678 cri.go:89] found id: ""
	I0603 13:54:15.202286 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.202296 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:15.202304 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:15.202372 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:15.236305 1143678 cri.go:89] found id: ""
	I0603 13:54:15.236345 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.236359 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:15.236368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:15.236459 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:15.273457 1143678 cri.go:89] found id: ""
	I0603 13:54:15.273493 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.273510 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:15.273521 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:15.273592 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:15.314917 1143678 cri.go:89] found id: ""
	I0603 13:54:15.314951 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.314963 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:15.314984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:15.315055 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:15.353060 1143678 cri.go:89] found id: ""
	I0603 13:54:15.353098 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.353112 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:15.353118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:15.353197 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:15.390412 1143678 cri.go:89] found id: ""
	I0603 13:54:15.390448 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.390460 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:15.390469 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:15.390534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:15.427735 1143678 cri.go:89] found id: ""
	I0603 13:54:15.427771 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.427782 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:15.427789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:15.427854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:15.467134 1143678 cri.go:89] found id: ""
	I0603 13:54:15.467165 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.467175 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:15.467184 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:15.467199 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:15.517924 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:15.517973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:15.531728 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:15.531760 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:15.608397 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.608421 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:15.608444 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:15.688976 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:15.689016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.319250 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:16.817018 1143252 pod_ready.go:81] duration metric: took 4m0.00664589s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:16.817042 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:16.817049 1143252 pod_ready.go:38] duration metric: took 4m6.670583216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:16.817081 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:16.817110 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:16.817158 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:16.871314 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:16.871339 1143252 cri.go:89] found id: ""
	I0603 13:54:16.871350 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:16.871405 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.876249 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:16.876319 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:16.917267 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:16.917298 1143252 cri.go:89] found id: ""
	I0603 13:54:16.917310 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:16.917374 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.923290 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:16.923374 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:16.963598 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:16.963619 1143252 cri.go:89] found id: ""
	I0603 13:54:16.963628 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:16.963689 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.968201 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:16.968277 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:17.008229 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:17.008264 1143252 cri.go:89] found id: ""
	I0603 13:54:17.008274 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:17.008341 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.012719 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:17.012795 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:17.048353 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.048384 1143252 cri.go:89] found id: ""
	I0603 13:54:17.048394 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:17.048459 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.053094 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:17.053162 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:17.088475 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:17.088507 1143252 cri.go:89] found id: ""
	I0603 13:54:17.088518 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:17.088583 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.093293 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:17.093373 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:17.130335 1143252 cri.go:89] found id: ""
	I0603 13:54:17.130370 1143252 logs.go:276] 0 containers: []
	W0603 13:54:17.130381 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:17.130389 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:17.130472 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:17.176283 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:17.176317 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:17.176324 1143252 cri.go:89] found id: ""
	I0603 13:54:17.176335 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:17.176409 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.181455 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.185881 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:17.185902 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:17.239636 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:17.239680 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:17.309488 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:17.309532 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:17.362243 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:17.362282 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:17.401389 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:17.401440 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.442095 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:17.442127 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:17.923198 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:17.923247 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:17.939968 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:17.940000 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:18.075054 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:18.075098 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:18.113954 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:18.113994 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:18.181862 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:18.181906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:18.227105 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:18.227137 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:18.272684 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.272721 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
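
In the segment above, the healthy profile (PID 1143252) does find containers, so each "crictl ps --quiet --name=..." hit is followed by "crictl logs --tail 400 <id>". A condensed sketch of that two-step fetch for a single component, using the same flags as in the log; the container ID handling is illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerLogs tails a container's logs the same way the harness does.
    func containerLogs(id string) (string, error) {
        out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Step 1: resolve the component name to container IDs.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        if err != nil {
            fmt.Println("crictl ps failed:", err)
            return
        }
        // Step 2: tail the logs of every ID that came back.
        for _, id := range strings.Fields(string(out)) {
            logs, err := containerLogs(id)
            fmt.Printf("--- kube-apiserver %s (err: %v) ---\n%s", id, err, logs)
        }
    }
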
	I0603 13:54:15.371753 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:17.870321 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:19.879331 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:15.990326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.489960 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.228279 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:18.242909 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:18.242985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:18.285400 1143678 cri.go:89] found id: ""
	I0603 13:54:18.285445 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.285455 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:18.285461 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:18.285521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:18.321840 1143678 cri.go:89] found id: ""
	I0603 13:54:18.321868 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.321877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:18.321884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:18.321943 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:18.358856 1143678 cri.go:89] found id: ""
	I0603 13:54:18.358888 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.358902 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:18.358911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:18.358979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:18.395638 1143678 cri.go:89] found id: ""
	I0603 13:54:18.395678 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.395691 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:18.395699 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:18.395766 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:18.435541 1143678 cri.go:89] found id: ""
	I0603 13:54:18.435570 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.435581 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:18.435589 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:18.435653 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:18.469491 1143678 cri.go:89] found id: ""
	I0603 13:54:18.469527 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.469538 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:18.469545 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:18.469615 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:18.507986 1143678 cri.go:89] found id: ""
	I0603 13:54:18.508018 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.508030 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:18.508039 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:18.508106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:18.542311 1143678 cri.go:89] found id: ""
	I0603 13:54:18.542343 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.542351 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:18.542361 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:18.542375 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:18.619295 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.619337 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:18.662500 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:18.662540 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:18.714392 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:18.714432 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:18.728750 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:18.728785 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:18.800786 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.301554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:21.315880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:21.315944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:21.358178 1143678 cri.go:89] found id: ""
	I0603 13:54:21.358208 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.358217 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:21.358227 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:21.358289 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:21.395873 1143678 cri.go:89] found id: ""
	I0603 13:54:21.395969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.395995 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:21.396014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:21.396111 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:21.431781 1143678 cri.go:89] found id: ""
	I0603 13:54:21.431810 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.431822 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:21.431831 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:21.431906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.472840 1143678 cri.go:89] found id: ""
	I0603 13:54:21.472872 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.472885 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:21.472893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.472955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.512296 1143678 cri.go:89] found id: ""
	I0603 13:54:21.512333 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.512346 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:21.512353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.512421 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.547555 1143678 cri.go:89] found id: ""
	I0603 13:54:21.547588 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.547599 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:21.547609 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.547670 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.584972 1143678 cri.go:89] found id: ""
	I0603 13:54:21.585005 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.585013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.585019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:21.585085 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:21.621566 1143678 cri.go:89] found id: ""
	I0603 13:54:21.621599 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.621610 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:21.621623 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:21.621639 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:21.637223 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:21.637263 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:21.712272 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.712294 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.712310 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.800453 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:21.800490 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:21.841477 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.841525 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:20.819740 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:20.836917 1143252 api_server.go:72] duration metric: took 4m15.913250824s to wait for apiserver process to appear ...
	I0603 13:54:20.836947 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:20.836988 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:20.837038 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:20.874034 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:20.874064 1143252 cri.go:89] found id: ""
	I0603 13:54:20.874076 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:20.874146 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.878935 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:20.879020 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:20.920390 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:20.920417 1143252 cri.go:89] found id: ""
	I0603 13:54:20.920425 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:20.920494 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.924858 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:20.924934 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:20.966049 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:20.966077 1143252 cri.go:89] found id: ""
	I0603 13:54:20.966088 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:20.966174 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.970734 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:20.970812 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.010892 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.010918 1143252 cri.go:89] found id: ""
	I0603 13:54:21.010929 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:21.010994 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.016274 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.016347 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.055294 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.055318 1143252 cri.go:89] found id: ""
	I0603 13:54:21.055327 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:21.055375 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.060007 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.060069 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.099200 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:21.099225 1143252 cri.go:89] found id: ""
	I0603 13:54:21.099236 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:21.099309 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.103590 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.103662 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.140375 1143252 cri.go:89] found id: ""
	I0603 13:54:21.140409 1143252 logs.go:276] 0 containers: []
	W0603 13:54:21.140422 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.140431 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:21.140498 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:21.180709 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.180735 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.180739 1143252 cri.go:89] found id: ""
	I0603 13:54:21.180747 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:21.180814 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.184952 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.189111 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.189140 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.663768 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:21.663807 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:21.719542 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:21.719573 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:21.786686 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:21.786725 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:21.824908 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:21.824948 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.864778 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:21.864818 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.904450 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:21.904480 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.942006 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:21.942040 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.979636 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.979673 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:22.033943 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:22.033980 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:22.048545 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:22.048578 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:22.154866 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:22.154906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:22.218033 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:22.218073 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
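The container-status step just above deliberately falls back from crictl to docker (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a). A minimal way to reproduce that listing by hand over minikube ssh (a sketch; <profile> is a placeholder for the profile under test, which this log line does not name):

    # list every CRI container on the node, falling back to docker if crictl is missing
    minikube ssh -p <profile> -- "sudo crictl ps -a || sudo docker ps -a"
    # per-component variant used throughout this log (etcd shown; same pattern for kube-apiserver, coredns, ...)
    minikube ssh -p <profile> -- "sudo crictl ps -a --quiet --name=etcd"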
	I0603 13:54:22.374700 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.871898 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:20.989874 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:23.489083 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.394864 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:24.408416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.408527 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.444572 1143678 cri.go:89] found id: ""
	I0603 13:54:24.444603 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.444612 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:24.444618 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.444672 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.483710 1143678 cri.go:89] found id: ""
	I0603 13:54:24.483744 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.483755 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:24.483763 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.483837 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.522396 1143678 cri.go:89] found id: ""
	I0603 13:54:24.522437 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.522450 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:24.522457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.522520 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.560865 1143678 cri.go:89] found id: ""
	I0603 13:54:24.560896 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.560905 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:24.560911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.560964 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:24.598597 1143678 cri.go:89] found id: ""
	I0603 13:54:24.598632 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.598643 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:24.598657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:24.598722 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:24.638854 1143678 cri.go:89] found id: ""
	I0603 13:54:24.638885 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.638897 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:24.638908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:24.638979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:24.678039 1143678 cri.go:89] found id: ""
	I0603 13:54:24.678076 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.678088 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:24.678096 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:24.678166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:24.712836 1143678 cri.go:89] found id: ""
	I0603 13:54:24.712871 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.712883 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:24.712896 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:24.712913 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:24.763503 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:24.763545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:24.779383 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:24.779416 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:24.867254 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:24.867287 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:24.867307 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:24.944920 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:24.944957 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
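Every component lookup in the cycle above came back empty and describe nodes failed with "connection refused" on localhost:8443, i.e. no apiserver is reachable on that node yet. A quick triage from the node itself (a hedged sketch, not part of the test flow):

    # anything listening on the apiserver port?
    sudo ss -tlnp | grep 8443
    # is the kubelet unit at least active?
    sudo systemctl is-active kubelet
    # any apiserver container, running or exited?
    sudo crictl ps -a --name=kube-apiserver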
	I0603 13:54:24.768551 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:54:24.774942 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:54:24.776278 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:24.776301 1143252 api_server.go:131] duration metric: took 3.939347802s to wait for apiserver health ...
	I0603 13:54:24.776310 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:24.776334 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.776386 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.827107 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:24.827139 1143252 cri.go:89] found id: ""
	I0603 13:54:24.827152 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:24.827210 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.831681 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.831752 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.875645 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:24.875689 1143252 cri.go:89] found id: ""
	I0603 13:54:24.875711 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:24.875778 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.880157 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.880256 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.932131 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:24.932157 1143252 cri.go:89] found id: ""
	I0603 13:54:24.932167 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:24.932262 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.938104 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.938168 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.980289 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:24.980318 1143252 cri.go:89] found id: ""
	I0603 13:54:24.980327 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:24.980389 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.985608 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.985687 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:25.033726 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.033749 1143252 cri.go:89] found id: ""
	I0603 13:54:25.033757 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:25.033811 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.038493 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:25.038561 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:25.077447 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.077474 1143252 cri.go:89] found id: ""
	I0603 13:54:25.077485 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:25.077545 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.081701 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:25.081770 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:25.120216 1143252 cri.go:89] found id: ""
	I0603 13:54:25.120246 1143252 logs.go:276] 0 containers: []
	W0603 13:54:25.120254 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:25.120261 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:25.120313 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:25.162562 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.162596 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.162602 1143252 cri.go:89] found id: ""
	I0603 13:54:25.162613 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:25.162678 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.167179 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.171531 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:25.171558 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:25.223749 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:25.223787 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:25.290251 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:25.290293 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:25.315271 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:25.315302 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:25.433219 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:25.433257 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:25.473156 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:25.473194 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:25.513988 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:25.514015 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.587224 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:25.587260 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:25.638872 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:25.638909 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:25.687323 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:25.687372 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.739508 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:25.739539 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.775066 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:25.775096 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.811982 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:25.812016 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
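The CRI-O and kubelet collection steps above are plain journalctl tails on the node; the same data can be pulled by hand over minikube ssh (sketch; <profile> is a placeholder):

    # last 400 lines of the container runtime and kubelet units, matching the commands in the log
    minikube ssh -p <profile> -- "sudo journalctl -u crio -n 400"
    minikube ssh -p <profile> -- "sudo journalctl -u kubelet -n 400"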
	I0603 13:54:28.685228 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:28.685261 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.685265 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.685269 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.685272 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.685276 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.685279 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.685285 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.685290 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.685298 1143252 system_pods.go:74] duration metric: took 3.908982484s to wait for pod list to return data ...
	I0603 13:54:28.685305 1143252 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:28.687914 1143252 default_sa.go:45] found service account: "default"
	I0603 13:54:28.687939 1143252 default_sa.go:55] duration metric: took 2.627402ms for default service account to be created ...
	I0603 13:54:28.687947 1143252 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:28.693336 1143252 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:28.693369 1143252 system_pods.go:89] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.693375 1143252 system_pods.go:89] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.693379 1143252 system_pods.go:89] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.693385 1143252 system_pods.go:89] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.693389 1143252 system_pods.go:89] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.693393 1143252 system_pods.go:89] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.693401 1143252 system_pods.go:89] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.693418 1143252 system_pods.go:89] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.693438 1143252 system_pods.go:126] duration metric: took 5.484487ms to wait for k8s-apps to be running ...
	I0603 13:54:28.693450 1143252 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:28.693497 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:28.710364 1143252 system_svc.go:56] duration metric: took 16.901982ms WaitForService to wait for kubelet
	I0603 13:54:28.710399 1143252 kubeadm.go:576] duration metric: took 4m23.786738812s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:28.710444 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:28.713300 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:28.713328 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:28.713362 1143252 node_conditions.go:105] duration metric: took 2.909242ms to run NodePressure ...
	I0603 13:54:28.713382 1143252 start.go:240] waiting for startup goroutines ...
	I0603 13:54:28.713392 1143252 start.go:245] waiting for cluster config update ...
	I0603 13:54:28.713424 1143252 start.go:254] writing updated cluster config ...
	I0603 13:54:28.713798 1143252 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:28.767538 1143252 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:28.769737 1143252 out.go:177] * Done! kubectl is now configured to use "embed-certs-223260" cluster and "default" namespace by default
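With the embed-certs-223260 context active, the metrics-server pod that is still Pending in the pod list above can be inspected straight from the host. A minimal sketch (the k8s-app=metrics-server label selector is an assumption based on the standard metrics-server addon manifest, not something this log states):

    kubectl --context embed-certs-223260 -n kube-system get pods -o wide
    # label selector assumed from the standard addon manifest
    kubectl --context embed-certs-223260 -n kube-system describe pod -l k8s-app=metrics-server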
	I0603 13:54:27.370695 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:29.870214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:25.990136 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:28.489276 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:30.489392 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:27.495908 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:27.509885 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:27.509968 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:27.545591 1143678 cri.go:89] found id: ""
	I0603 13:54:27.545626 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.545635 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:27.545641 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:27.545695 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:27.583699 1143678 cri.go:89] found id: ""
	I0603 13:54:27.583728 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.583740 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:27.583748 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:27.583835 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:27.623227 1143678 cri.go:89] found id: ""
	I0603 13:54:27.623268 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.623277 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:27.623283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:27.623341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:27.663057 1143678 cri.go:89] found id: ""
	I0603 13:54:27.663090 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.663102 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:27.663109 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:27.663187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:27.708448 1143678 cri.go:89] found id: ""
	I0603 13:54:27.708481 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.708489 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:27.708495 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:27.708551 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:27.743629 1143678 cri.go:89] found id: ""
	I0603 13:54:27.743663 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.743674 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:27.743682 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:27.743748 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:27.778094 1143678 cri.go:89] found id: ""
	I0603 13:54:27.778128 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.778137 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:27.778147 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:27.778210 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:27.813137 1143678 cri.go:89] found id: ""
	I0603 13:54:27.813170 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.813180 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:27.813192 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:27.813208 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:27.861100 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:27.861136 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:27.914752 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:27.914794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:27.929479 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:27.929511 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:28.002898 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:28.002926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:28.002942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.581890 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:30.595982 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:30.596068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:30.638804 1143678 cri.go:89] found id: ""
	I0603 13:54:30.638841 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.638853 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:30.638862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:30.638942 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:30.677202 1143678 cri.go:89] found id: ""
	I0603 13:54:30.677242 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.677253 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:30.677262 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:30.677329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:30.717382 1143678 cri.go:89] found id: ""
	I0603 13:54:30.717436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.717446 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:30.717455 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:30.717523 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:30.753691 1143678 cri.go:89] found id: ""
	I0603 13:54:30.753719 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.753728 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:30.753734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:30.753798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:30.790686 1143678 cri.go:89] found id: ""
	I0603 13:54:30.790714 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.790723 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:30.790729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:30.790783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:30.830196 1143678 cri.go:89] found id: ""
	I0603 13:54:30.830224 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.830237 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:30.830245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:30.830299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:30.865952 1143678 cri.go:89] found id: ""
	I0603 13:54:30.865980 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.865992 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:30.866000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:30.866066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:30.901561 1143678 cri.go:89] found id: ""
	I0603 13:54:30.901592 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.901601 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:30.901610 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:30.901627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.979416 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:30.979459 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:31.035024 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:31.035061 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:31.089005 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:31.089046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:31.105176 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:31.105210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:31.172862 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:32.371040 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.870810 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:32.989041 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.989599 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:33.674069 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:33.688423 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:33.688499 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:33.729840 1143678 cri.go:89] found id: ""
	I0603 13:54:33.729876 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.729886 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:33.729893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:33.729945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:33.764984 1143678 cri.go:89] found id: ""
	I0603 13:54:33.765010 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.765018 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:33.765025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:33.765075 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:33.798411 1143678 cri.go:89] found id: ""
	I0603 13:54:33.798446 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.798459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:33.798468 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:33.798547 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:33.831565 1143678 cri.go:89] found id: ""
	I0603 13:54:33.831600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.831611 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:33.831620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:33.831688 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:33.869701 1143678 cri.go:89] found id: ""
	I0603 13:54:33.869727 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.869735 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:33.869741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:33.869802 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:33.906108 1143678 cri.go:89] found id: ""
	I0603 13:54:33.906134 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.906144 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:33.906153 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:33.906218 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:33.946577 1143678 cri.go:89] found id: ""
	I0603 13:54:33.946607 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.946615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:33.946621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:33.946673 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:33.986691 1143678 cri.go:89] found id: ""
	I0603 13:54:33.986724 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.986743 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:33.986757 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:33.986775 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:34.044068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:34.044110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:34.059686 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:34.059724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:34.141490 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:34.141514 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:34.141531 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:34.227890 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:34.227930 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:36.778969 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:36.792527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:36.792612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:36.828044 1143678 cri.go:89] found id: ""
	I0603 13:54:36.828083 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.828096 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:36.828102 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:36.828166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:36.863869 1143678 cri.go:89] found id: ""
	I0603 13:54:36.863905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.863917 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:36.863926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:36.863996 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:36.899610 1143678 cri.go:89] found id: ""
	I0603 13:54:36.899649 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.899661 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:36.899669 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:36.899742 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:36.938627 1143678 cri.go:89] found id: ""
	I0603 13:54:36.938664 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.938675 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:36.938683 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:36.938739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:36.973810 1143678 cri.go:89] found id: ""
	I0603 13:54:36.973842 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.973857 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:36.973863 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:36.973915 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.013759 1143678 cri.go:89] found id: ""
	I0603 13:54:37.013792 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.013805 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:37.013813 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.013881 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.049665 1143678 cri.go:89] found id: ""
	I0603 13:54:37.049697 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.049706 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.049712 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:37.049787 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:37.087405 1143678 cri.go:89] found id: ""
	I0603 13:54:37.087436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.087446 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:37.087457 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:37.087470 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:37.126443 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.126476 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.177976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:37.178015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:37.192821 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:37.192860 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:37.267895 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:37.267926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:37.267945 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:36.871536 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:37.371048 1143450 pod_ready.go:81] duration metric: took 4m0.007102739s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:37.371080 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:37.371092 1143450 pod_ready.go:38] duration metric: took 4m5.236838117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
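The WaitExtra timeout above is minikube giving up after roughly four minutes of polling for the metrics-server pod to report Ready. The same condition can be expressed explicitly from the host with kubectl wait (sketch; <context> is a placeholder for this profile's context, and the label selector is again an assumption based on the standard addon):

    kubectl --context <context> -n kube-system wait pod -l k8s-app=metrics-server \
        --for=condition=Ready --timeout=4m
    # on timeout, recent events usually show why the container never became ready
    kubectl --context <context> -n kube-system get events --sort-by=.lastTimestamp | tail -n 20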
	I0603 13:54:37.371111 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:37.371145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:37.371202 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:37.428454 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:37.428487 1143450 cri.go:89] found id: ""
	I0603 13:54:37.428498 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:37.428564 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.434473 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:37.434552 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:37.476251 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.476288 1143450 cri.go:89] found id: ""
	I0603 13:54:37.476300 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:37.476368 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.483190 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:37.483280 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:37.528660 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.528693 1143450 cri.go:89] found id: ""
	I0603 13:54:37.528704 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:37.528797 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.533716 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:37.533809 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:37.573995 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.574016 1143450 cri.go:89] found id: ""
	I0603 13:54:37.574025 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:37.574071 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.578385 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:37.578465 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:37.616468 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:37.616511 1143450 cri.go:89] found id: ""
	I0603 13:54:37.616522 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:37.616603 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.621204 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:37.621277 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.661363 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.661390 1143450 cri.go:89] found id: ""
	I0603 13:54:37.661401 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:37.661507 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.665969 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.666055 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.705096 1143450 cri.go:89] found id: ""
	I0603 13:54:37.705128 1143450 logs.go:276] 0 containers: []
	W0603 13:54:37.705136 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.705142 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:37.705210 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:37.746365 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:37.746400 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.746404 1143450 cri.go:89] found id: ""
	I0603 13:54:37.746412 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:37.746470 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.750874 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.755146 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:37.755175 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.811365 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:37.811403 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.849687 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.849729 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.904870 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:37.904909 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.955448 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:37.955497 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.996659 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:37.996687 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:38.047501 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:38.047540 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:38.090932 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:38.090969 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:38.606612 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:38.606672 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:38.652732 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:38.652774 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:38.670570 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:38.670620 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:38.812156 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:38.812208 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:38.862940 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:38.862988 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.491134 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.990379 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.846505 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:39.860426 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:39.860514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:39.896684 1143678 cri.go:89] found id: ""
	I0603 13:54:39.896712 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.896726 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:39.896736 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:39.896801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:39.932437 1143678 cri.go:89] found id: ""
	I0603 13:54:39.932482 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.932494 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:39.932503 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:39.932571 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:39.967850 1143678 cri.go:89] found id: ""
	I0603 13:54:39.967883 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.967891 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:39.967898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:39.967952 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:40.003255 1143678 cri.go:89] found id: ""
	I0603 13:54:40.003284 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.003292 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:40.003298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:40.003351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:40.045865 1143678 cri.go:89] found id: ""
	I0603 13:54:40.045892 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.045904 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:40.045912 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:40.045976 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:40.082469 1143678 cri.go:89] found id: ""
	I0603 13:54:40.082498 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.082507 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:40.082513 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:40.082584 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:40.117181 1143678 cri.go:89] found id: ""
	I0603 13:54:40.117231 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.117242 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:40.117250 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:40.117320 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:40.157776 1143678 cri.go:89] found id: ""
	I0603 13:54:40.157813 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.157822 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:40.157832 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:40.157848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:40.213374 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:40.213437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:40.228298 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:40.228330 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:40.305450 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:40.305485 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:40.305503 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:40.393653 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:40.393704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
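The block above shows the harness probing for every control-plane component with "sudo crictl ps -a --quiet --name=<component>" and getting no IDs back, hence the repeated "No container was found matching" warnings. For illustration only, the same probe pattern can be reproduced with a short Go program like the one below (a minimal sketch that shells out to crictl locally instead of going through minikube's SSH runner; the helper name and error handling are assumptions, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs returns the IDs of all containers (any state) whose name
    // matches the given component, using the same crictl invocation seen in the log.
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps failed for %q: %w", component, err)
    	}
    	return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Println("W", err)
    			continue
    		}
    		if len(ids) == 0 {
    			// Mirrors the "No container was found matching" warnings in the log.
    			fmt.Printf("W no container found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("I %d containers: %v\n", len(ids), ids)
    	}
    }

An empty result for every component, as in the log above, is what forces the fallback to node-level sources (kubelet journal, dmesg, CRI-O journal) instead of per-container logs.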
	I0603 13:54:41.405129 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:41.423234 1143450 api_server.go:72] duration metric: took 4m14.998447047s to wait for apiserver process to appear ...
	I0603 13:54:41.423266 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:41.423312 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:41.423374 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:41.463540 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.463562 1143450 cri.go:89] found id: ""
	I0603 13:54:41.463570 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:41.463620 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.468145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:41.468226 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:41.511977 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.512000 1143450 cri.go:89] found id: ""
	I0603 13:54:41.512017 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:41.512081 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.516600 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:41.516674 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:41.554392 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:41.554420 1143450 cri.go:89] found id: ""
	I0603 13:54:41.554443 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:41.554508 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.558983 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:41.559039 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:41.597710 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:41.597737 1143450 cri.go:89] found id: ""
	I0603 13:54:41.597747 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:41.597811 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.602164 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:41.602227 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:41.639422 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:41.639452 1143450 cri.go:89] found id: ""
	I0603 13:54:41.639462 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:41.639532 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.644093 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:41.644171 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:41.682475 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.682506 1143450 cri.go:89] found id: ""
	I0603 13:54:41.682515 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:41.682578 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.687654 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:41.687734 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:41.724804 1143450 cri.go:89] found id: ""
	I0603 13:54:41.724839 1143450 logs.go:276] 0 containers: []
	W0603 13:54:41.724850 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:41.724858 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:41.724928 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:41.764625 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:41.764653 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:41.764659 1143450 cri.go:89] found id: ""
	I0603 13:54:41.764670 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:41.764736 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.769499 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.773782 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:41.773806 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.816486 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:41.816520 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:41.833538 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:41.833569 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.877958 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:41.878004 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.922575 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:41.922612 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.983865 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:41.983900 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:42.032746 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:42.032773 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:42.076129 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:42.076166 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:42.129061 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:42.129099 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:42.248179 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:42.248213 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:42.292179 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:42.292288 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:42.340447 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:42.340493 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:42.381993 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:42.382024 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
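Once container IDs are known, the "Gathering logs for ..." lines above collect a bundle per source: the kubelet and CRI-O journals, a filtered dmesg, "kubectl describe nodes", and "crictl logs --tail 400 <id>" for each discovered container. A rough Go sketch of that gathering loop follows; the commands are taken verbatim from the log, but the surrounding structure (a map of named commands run through /bin/bash -c) is an illustrative assumption, not minikube's logs.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs runs each named command through /bin/bash -c, the same way the
    // ssh_runner lines above do, and returns the collected output per source.
    func gatherLogs(containerIDs map[string]string) map[string]string {
    	cmds := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"CRI-O":            "sudo journalctl -u crio -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, id := range containerIDs {
    		cmds[name] = "sudo /usr/bin/crictl logs --tail 400 " + id
    	}
    	out := make(map[string]string, len(cmds))
    	for name, cmd := range cmds {
    		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("W gathering %s failed: %v\n", name, err)
    		}
    		out[name] = string(b)
    	}
    	return out
    }

    func main() {
    	logs := gatherLogs(map[string]string{
    		"kube-apiserver": "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836",
    	})
    	fmt.Printf("gathered %d log sources\n", len(logs))
    }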
	I0603 13:54:42.488926 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:44.990221 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:42.934691 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:42.948505 1143678 kubeadm.go:591] duration metric: took 4m4.45791317s to restartPrimaryControlPlane
	W0603 13:54:42.948592 1143678 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:54:42.948629 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:54:48.316951 1143678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.36829775s)
	I0603 13:54:48.317039 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:48.333630 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:54:48.345772 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:54:48.357359 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:54:48.357386 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:54:48.357477 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:54:48.367844 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:54:48.367917 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:54:48.379349 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:54:48.389684 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:54:48.389760 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:54:48.401562 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.412670 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:54:48.412743 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.424261 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:54:48.434598 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:54:48.434674 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
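The four grep/rm pairs above are the stale-config cleanup fallback: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here the files are simply absent). A hedged Go sketch of that check-and-remove loop is below; it shells out to grep and rm the way the log does, but the function itself is illustrative rather than minikube's kubeadm.go implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanStaleKubeconfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint, mirroring the grep/rm pairs in the log.
    func cleanStaleKubeconfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is missing or the file does not exist.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("I %q may not be in %s - will remove\n", endpoint, f)
    			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
    				fmt.Printf("W failed to remove %s: %v\n", f, err)
    			}
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }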
	I0603 13:54:48.446187 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:54:48.527873 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:54:48.528073 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:54:48.695244 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:54:48.695401 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:54:48.695581 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:54:48.930141 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:54:45.281199 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:54:45.286305 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:54:45.287421 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:45.287444 1143450 api_server.go:131] duration metric: took 3.864171356s to wait for apiserver health ...
	I0603 13:54:45.287455 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:45.287486 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:45.287540 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:45.328984 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.329012 1143450 cri.go:89] found id: ""
	I0603 13:54:45.329022 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:45.329075 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.334601 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:45.334683 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:45.382942 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:45.382967 1143450 cri.go:89] found id: ""
	I0603 13:54:45.382978 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:45.383039 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.387904 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:45.387969 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:45.431948 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.431981 1143450 cri.go:89] found id: ""
	I0603 13:54:45.431992 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:45.432052 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.440993 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:45.441074 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:45.490086 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.490114 1143450 cri.go:89] found id: ""
	I0603 13:54:45.490125 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:45.490194 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.494628 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:45.494688 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:45.532264 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:45.532296 1143450 cri.go:89] found id: ""
	I0603 13:54:45.532307 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:45.532374 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.536914 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:45.536985 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:45.576641 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:45.576663 1143450 cri.go:89] found id: ""
	I0603 13:54:45.576671 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:45.576720 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.580872 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:45.580926 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:45.628834 1143450 cri.go:89] found id: ""
	I0603 13:54:45.628864 1143450 logs.go:276] 0 containers: []
	W0603 13:54:45.628872 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:45.628879 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:45.628931 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:45.671689 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:45.671719 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:45.671727 1143450 cri.go:89] found id: ""
	I0603 13:54:45.671740 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:45.671799 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.677161 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.682179 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:45.682219 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:45.731155 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:45.731192 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:45.846365 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:45.846411 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.907694 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:45.907733 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.952881 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:45.952919 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.998674 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:45.998722 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:46.061902 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:46.061949 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:46.106017 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:46.106056 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:46.473915 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:46.473981 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:46.530212 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:46.530260 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:46.545954 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:46.545996 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:46.595057 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:46.595097 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:46.637835 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:46.637872 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:49.190539 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:49.190572 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.190577 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.190582 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.190586 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.190590 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.190593 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.190602 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.190609 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.190620 1143450 system_pods.go:74] duration metric: took 3.903157143s to wait for pod list to return data ...
	I0603 13:54:49.190633 1143450 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:49.193192 1143450 default_sa.go:45] found service account: "default"
	I0603 13:54:49.193219 1143450 default_sa.go:55] duration metric: took 2.575016ms for default service account to be created ...
	I0603 13:54:49.193229 1143450 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:49.202028 1143450 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:49.202065 1143450 system_pods.go:89] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.202074 1143450 system_pods.go:89] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.202081 1143450 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.202088 1143450 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.202094 1143450 system_pods.go:89] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.202100 1143450 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.202113 1143450 system_pods.go:89] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.202124 1143450 system_pods.go:89] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.202135 1143450 system_pods.go:126] duration metric: took 8.899065ms to wait for k8s-apps to be running ...
	I0603 13:54:49.202152 1143450 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:49.202209 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:49.220199 1143450 system_svc.go:56] duration metric: took 18.025994ms WaitForService to wait for kubelet
	I0603 13:54:49.220242 1143450 kubeadm.go:576] duration metric: took 4m22.79546223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:49.220269 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:49.223327 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:49.223354 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:49.223367 1143450 node_conditions.go:105] duration metric: took 3.093435ms to run NodePressure ...
	I0603 13:54:49.223383 1143450 start.go:240] waiting for startup goroutines ...
	I0603 13:54:49.223393 1143450 start.go:245] waiting for cluster config update ...
	I0603 13:54:49.223408 1143450 start.go:254] writing updated cluster config ...
	I0603 13:54:49.223704 1143450 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:49.277924 1143450 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:49.280442 1143450 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-030870" cluster and "default" namespace by default
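For this profile the "waiting for apiserver healthz status" phase above resolves once https://192.168.39.177:8444/healthz returns 200 with body "ok", after which the pod, service-account, and node-condition checks complete and the start finishes. Below is a minimal sketch of such a healthz poll; the TLS handling (skipping certificate verification to keep the example self-contained) and the 2-second interval are assumptions for illustration, not what minikube's api_server.go actually uses:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it reports "ok"
    // or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.177:8444/healthz", 4*time.Minute); err != nil {
    		fmt.Println("E", err)
    		return
    	}
    	fmt.Println("I apiserver healthz returned 200: ok")
    }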
	I0603 13:54:48.932024 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:54:48.932110 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:54:48.932168 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:54:48.932235 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:54:48.932305 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:54:48.932481 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:54:48.932639 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:54:48.933272 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:54:48.933771 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:54:48.934251 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:54:48.934654 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:54:48.934712 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:54:48.934762 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:54:49.063897 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:54:49.266680 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:54:49.364943 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:54:49.628905 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:54:49.645861 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:54:49.645991 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:54:49.646049 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:54:49.795196 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:54:47.490336 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.989543 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.798407 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:54:49.798564 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:54:49.800163 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:54:49.802226 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:54:49.803809 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:54:49.806590 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:54:52.490088 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:54.990092 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:57.488119 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:59.489775 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:01.490194 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:03.989075 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:05.990054 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:08.489226 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:10.989028 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:13.489118 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:15.489176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:17.989008 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:20.489091 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:22.989284 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:24.990020 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.489326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.983679 1142862 pod_ready.go:81] duration metric: took 4m0.001142992s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	E0603 13:55:27.983708 1142862 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 13:55:27.983731 1142862 pod_ready.go:38] duration metric: took 4m12.038904247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:55:27.983760 1142862 kubeadm.go:591] duration metric: took 4m21.273943202s to restartPrimaryControlPlane
	W0603 13:55:27.983831 1142862 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:55:27.983865 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
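The pod_ready.go:102 lines above poll the Ready condition of metrics-server-569cc877fc-mtvrq until the 4m0s budget expires, which is the failure recorded for this test. A rough client-go sketch of that kind of readiness check is shown below; it assumes k8s.io/client-go is available and a kubeconfig at the default location, and it is not minikube's actual pod_ready implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named pod has a Ready condition of True.
    func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		ready, err := isPodReady(cs, "kube-system", "metrics-server-569cc877fc-mtvrq")
    		if err == nil && ready {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // the log polls on a similar cadence
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }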
	I0603 13:55:29.807867 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:55:29.808474 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:29.808754 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:34.809455 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:34.809722 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:44.810305 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:44.810491 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
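The repeated [kubelet-check] messages above come from kubeadm probing http://localhost:10248/healthz and getting connection refused, so in this v1.20.0 run the kubelet never becomes healthy within kubeadm's wait window. The sketch below reproduces that probe-with-deadline pattern in Go; the fixed retry interval is a simplification for illustration (kubeadm uses its own backoff), and the code is not part of kubeadm or minikube:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForKubelet polls the kubelet healthz endpoint described in the
    // [kubelet-check] lines above, giving up after the deadline.
    func waitForKubelet(timeout time.Duration) error {
    	url := "http://localhost:10248/healthz"
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		} else {
    			// Matches the failures above: connect: connection refused.
    			fmt.Printf("[kubelet-check] call to %s failed: %v\n", url, err)
    		}
    		time.Sleep(5 * time.Second) // fixed interval for simplicity
    	}
    	return fmt.Errorf("kubelet did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForKubelet(4 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }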
	I0603 13:55:59.870853 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.886953189s)
	I0603 13:55:59.870958 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:55:59.889658 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:55:59.901529 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:55:59.914241 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:55:59.914266 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:55:59.914312 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:55:59.924884 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:55:59.924950 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:55:59.935494 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:55:59.946222 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:55:59.946321 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:55:59.956749 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.967027 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:55:59.967110 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.979124 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:55:59.989689 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:55:59.989751 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:56:00.000616 1142862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:00.230878 1142862 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:04.811725 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:04.811929 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:08.995375 1142862 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 13:56:08.995463 1142862 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:08.995588 1142862 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:08.995724 1142862 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:08.995874 1142862 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:08.995970 1142862 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:08.997810 1142862 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:08.997914 1142862 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:08.998045 1142862 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:08.998154 1142862 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:08.998321 1142862 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:08.998423 1142862 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:08.998506 1142862 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:08.998578 1142862 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:08.998665 1142862 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:08.998764 1142862 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:08.998860 1142862 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:08.998919 1142862 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:08.999011 1142862 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:08.999111 1142862 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:08.999202 1142862 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 13:56:08.999275 1142862 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:08.999354 1142862 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:08.999423 1142862 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:08.999538 1142862 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:08.999692 1142862 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:09.001133 1142862 out.go:204]   - Booting up control plane ...
	I0603 13:56:09.001218 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:09.001293 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:09.001354 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:09.001499 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:09.001584 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:09.001637 1142862 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:09.001768 1142862 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 13:56:09.001881 1142862 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 13:56:09.001941 1142862 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.923053ms
	I0603 13:56:09.002010 1142862 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 13:56:09.002090 1142862 kubeadm.go:309] [api-check] The API server is healthy after 5.502208975s
	I0603 13:56:09.002224 1142862 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 13:56:09.002363 1142862 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 13:56:09.002457 1142862 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 13:56:09.002647 1142862 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-817450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 13:56:09.002713 1142862 kubeadm.go:309] [bootstrap-token] Using token: a7hbk8.xb8is7k6ewa3l3ya
	I0603 13:56:09.004666 1142862 out.go:204]   - Configuring RBAC rules ...
	I0603 13:56:09.004792 1142862 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 13:56:09.004883 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 13:56:09.005026 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 13:56:09.005234 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 13:56:09.005389 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 13:56:09.005531 1142862 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 13:56:09.005651 1142862 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 13:56:09.005709 1142862 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 13:56:09.005779 1142862 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 13:56:09.005787 1142862 kubeadm.go:309] 
	I0603 13:56:09.005869 1142862 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 13:56:09.005885 1142862 kubeadm.go:309] 
	I0603 13:56:09.006014 1142862 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 13:56:09.006034 1142862 kubeadm.go:309] 
	I0603 13:56:09.006076 1142862 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 13:56:09.006136 1142862 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 13:56:09.006197 1142862 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 13:56:09.006203 1142862 kubeadm.go:309] 
	I0603 13:56:09.006263 1142862 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 13:56:09.006273 1142862 kubeadm.go:309] 
	I0603 13:56:09.006330 1142862 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 13:56:09.006338 1142862 kubeadm.go:309] 
	I0603 13:56:09.006393 1142862 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 13:56:09.006476 1142862 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 13:56:09.006542 1142862 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 13:56:09.006548 1142862 kubeadm.go:309] 
	I0603 13:56:09.006629 1142862 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 13:56:09.006746 1142862 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 13:56:09.006758 1142862 kubeadm.go:309] 
	I0603 13:56:09.006850 1142862 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.006987 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 13:56:09.007028 1142862 kubeadm.go:309] 	--control-plane 
	I0603 13:56:09.007037 1142862 kubeadm.go:309] 
	I0603 13:56:09.007141 1142862 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 13:56:09.007170 1142862 kubeadm.go:309] 
	I0603 13:56:09.007266 1142862 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.007427 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
	I0603 13:56:09.007451 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:56:09.007464 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:56:09.009292 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:56:09.010750 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:56:09.022810 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
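"Configuring bridge CNI" above amounts to creating /etc/cni/net.d and writing a conflist into it (the 496-byte 1-k8s.conflist). The snippet below writes a generic bridge-plugin conflist in the standard CNI format; the JSON shown is a common example of such a file and the subnet is an assumption, not the exact contents minikube ships, and the program needs root to write under /etc:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // A generic bridge CNI conflist; the concrete file minikube writes
    // (1-k8s.conflist, 496 bytes in the log above) may differ.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	dir := "/etc/cni/net.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors "sudo mkdir -p /etc/cni/net.d"
    		panic(err)
    	}
    	path := filepath.Join(dir, "1-k8s.conflist")
    	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote", path)
    }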
	I0603 13:56:09.052132 1142862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-817450 minikube.k8s.io/updated_at=2024_06_03T13_56_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=no-preload-817450 minikube.k8s.io/primary=true
	I0603 13:56:09.291610 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.296892 1142862 ops.go:34] apiserver oom_adj: -16
	I0603 13:56:09.792736 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.292471 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.792688 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.291782 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.792454 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.292056 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.792150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.292620 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.792024 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.292501 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.791790 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.292128 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.792608 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.292106 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.292276 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.292644 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.792571 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.292064 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.791908 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.292511 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.792137 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.292153 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.791809 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.882178 1142862 kubeadm.go:1107] duration metric: took 12.830108615s to wait for elevateKubeSystemPrivileges
	W0603 13:56:21.882223 1142862 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 13:56:21.882236 1142862 kubeadm.go:393] duration metric: took 5m15.237452092s to StartCluster
	I0603 13:56:21.882260 1142862 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.882368 1142862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:56:21.883986 1142862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.884288 1142862 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:56:21.885915 1142862 out.go:177] * Verifying Kubernetes components...
	I0603 13:56:21.884411 1142862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:56:21.884504 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:56:21.887156 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:56:21.887168 1142862 addons.go:69] Setting storage-provisioner=true in profile "no-preload-817450"
	I0603 13:56:21.887199 1142862 addons.go:69] Setting metrics-server=true in profile "no-preload-817450"
	I0603 13:56:21.887230 1142862 addons.go:234] Setting addon storage-provisioner=true in "no-preload-817450"
	W0603 13:56:21.887245 1142862 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:56:21.887261 1142862 addons.go:234] Setting addon metrics-server=true in "no-preload-817450"
	W0603 13:56:21.887276 1142862 addons.go:243] addon metrics-server should already be in state true
	I0603 13:56:21.887295 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887316 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887156 1142862 addons.go:69] Setting default-storageclass=true in profile "no-preload-817450"
	I0603 13:56:21.887366 1142862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-817450"
	I0603 13:56:21.887709 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887711 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887749 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887752 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887779 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887778 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.906019 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I0603 13:56:21.906319 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I0603 13:56:21.906563 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0603 13:56:21.906601 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.906714 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907043 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907126 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907143 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907269 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907288 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907558 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907578 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907752 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.907891 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908248 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.908269 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.908419 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908487 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.909150 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.909175 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.912898 1142862 addons.go:234] Setting addon default-storageclass=true in "no-preload-817450"
	W0603 13:56:21.912926 1142862 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:56:21.912963 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.913361 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.913413 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.928877 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0603 13:56:21.929336 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0603 13:56:21.929541 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930006 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930064 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0603 13:56:21.930161 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930186 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930580 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930723 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.930798 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930812 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930891 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.931037 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.931052 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.931187 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931369 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931394 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.932113 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.932140 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.933613 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.936068 1142862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:56:21.934518 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.937788 1142862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:21.937821 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:56:21.937844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.939174 1142862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:56:21.940435 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:56:21.940458 1142862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:56:21.940559 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.942628 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.943950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944227 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944257 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944449 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944658 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.944734 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944780 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.944919 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944932 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.945154 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.945309 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.945457 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.951140 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0603 13:56:21.951606 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.952125 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.952152 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.952579 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.952808 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.954505 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.954760 1142862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:21.954781 1142862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:56:21.954801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.958298 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.958816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.958851 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.959086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.959325 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.959515 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.959678 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:22.102359 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:56:22.121380 1142862 node_ready.go:35] waiting up to 6m0s for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135572 1142862 node_ready.go:49] node "no-preload-817450" has status "Ready":"True"
	I0603 13:56:22.135599 1142862 node_ready.go:38] duration metric: took 14.156504ms for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135614 1142862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:22.151036 1142862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:22.283805 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:22.288913 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:56:22.288938 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:56:22.297769 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:22.329187 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:56:22.329221 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:56:22.393569 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:22.393594 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:56:22.435605 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:23.470078 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18622743s)
	I0603 13:56:23.470155 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.172344092s)
	I0603 13:56:23.470171 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470192 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470200 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470216 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470515 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.470553 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470567 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470576 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470586 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470589 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470602 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470613 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470625 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470807 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470823 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.471108 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.471138 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.471180 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492187 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.492226 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.492596 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.492618 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492636 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.892903 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.45716212s)
	I0603 13:56:23.892991 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893006 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893418 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893426 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893442 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893459 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893468 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893790 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893811 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893832 1142862 addons.go:475] Verifying addon metrics-server=true in "no-preload-817450"
	I0603 13:56:23.895990 1142862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:56:23.897968 1142862 addons.go:510] duration metric: took 2.013558036s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 13:56:24.157803 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"False"
	I0603 13:56:24.658730 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.658765 1142862 pod_ready.go:81] duration metric: took 2.507699067s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.658779 1142862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664053 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.664084 1142862 pod_ready.go:81] duration metric: took 5.2962ms for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664096 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668496 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.668521 1142862 pod_ready.go:81] duration metric: took 4.417565ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668533 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673549 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.673568 1142862 pod_ready.go:81] duration metric: took 5.026882ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673577 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678207 1142862 pod_ready.go:92] pod "kube-proxy-t45fn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.678228 1142862 pod_ready.go:81] duration metric: took 4.644345ms for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678239 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056174 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:25.056204 1142862 pod_ready.go:81] duration metric: took 377.957963ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056214 1142862 pod_ready.go:38] duration metric: took 2.920586356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:25.056231 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:56:25.056294 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:56:25.071253 1142862 api_server.go:72] duration metric: took 3.186917827s to wait for apiserver process to appear ...
	I0603 13:56:25.071291 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:56:25.071319 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:56:25.076592 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:56:25.077531 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:56:25.077553 1142862 api_server.go:131] duration metric: took 6.255263ms to wait for apiserver health ...
	I0603 13:56:25.077561 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:56:25.258520 1142862 system_pods.go:59] 9 kube-system pods found
	I0603 13:56:25.258552 1142862 system_pods.go:61] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.258557 1142862 system_pods.go:61] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.258560 1142862 system_pods.go:61] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.258565 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.258569 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.258573 1142862 system_pods.go:61] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.258578 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.258585 1142862 system_pods.go:61] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.258591 1142862 system_pods.go:61] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.258603 1142862 system_pods.go:74] duration metric: took 181.034608ms to wait for pod list to return data ...
	I0603 13:56:25.258618 1142862 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:56:25.454775 1142862 default_sa.go:45] found service account: "default"
	I0603 13:56:25.454810 1142862 default_sa.go:55] duration metric: took 196.18004ms for default service account to be created ...
	I0603 13:56:25.454820 1142862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:56:25.658868 1142862 system_pods.go:86] 9 kube-system pods found
	I0603 13:56:25.658908 1142862 system_pods.go:89] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.658919 1142862 system_pods.go:89] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.658926 1142862 system_pods.go:89] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.658932 1142862 system_pods.go:89] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.658938 1142862 system_pods.go:89] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.658944 1142862 system_pods.go:89] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.658950 1142862 system_pods.go:89] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.658959 1142862 system_pods.go:89] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.658970 1142862 system_pods.go:89] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.658983 1142862 system_pods.go:126] duration metric: took 204.156078ms to wait for k8s-apps to be running ...
	I0603 13:56:25.658999 1142862 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:56:25.659058 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:25.674728 1142862 system_svc.go:56] duration metric: took 15.717684ms WaitForService to wait for kubelet
	I0603 13:56:25.674759 1142862 kubeadm.go:576] duration metric: took 3.790431991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:56:25.674777 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:56:25.855640 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:56:25.855671 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:56:25.855684 1142862 node_conditions.go:105] duration metric: took 180.901974ms to run NodePressure ...
	I0603 13:56:25.855696 1142862 start.go:240] waiting for startup goroutines ...
	I0603 13:56:25.855703 1142862 start.go:245] waiting for cluster config update ...
	I0603 13:56:25.855716 1142862 start.go:254] writing updated cluster config ...
	I0603 13:56:25.856020 1142862 ssh_runner.go:195] Run: rm -f paused
	I0603 13:56:25.908747 1142862 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:56:25.911049 1142862 out.go:177] * Done! kubectl is now configured to use "no-preload-817450" cluster and "default" namespace by default
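The readiness checks logged above can be replayed by hand against this profile; a minimal sketch using only the address and paths shown in the log (the curl call assumes the default anonymous RBAC binding for /healthz):

	# apiserver health endpoint the test polls (same address as in the log)
	curl -k https://192.168.72.125:8443/healthz
	# list the kube-system pods enumerated by system_pods.go, via the node's kubeconfig
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl get pods -n kube-system
	# kubelet service check performed by system_svc.go
	sudo systemctl is-active kubelet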
	I0603 13:56:44.813650 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:44.813933 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:44.813964 1143678 kubeadm.go:309] 
	I0603 13:56:44.814039 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:56:44.814075 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:56:44.814115 1143678 kubeadm.go:309] 
	I0603 13:56:44.814197 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:56:44.814246 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:56:44.814369 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:56:44.814378 1143678 kubeadm.go:309] 
	I0603 13:56:44.814496 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:56:44.814540 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:56:44.814573 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:56:44.814580 1143678 kubeadm.go:309] 
	I0603 13:56:44.814685 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:56:44.814785 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:56:44.814798 1143678 kubeadm.go:309] 
	I0603 13:56:44.814896 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:56:44.815001 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:56:44.815106 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:56:44.815208 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:56:44.815220 1143678 kubeadm.go:309] 
	I0603 13:56:44.816032 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:44.816137 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:56:44.816231 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 13:56:44.816405 1143678 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
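When kubeadm times out like this, the triage steps named in its own output can be run in one pass on the node; a minimal sketch (flags beyond those quoted above, such as --no-pager, are assumptions):

	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# the probe kubeadm's kubelet-check performs
	curl -sSL http://localhost:10248/healthz
	# control-plane containers, if any were created by CRI-O
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause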
	
	I0603 13:56:44.816480 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:56:45.288649 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:45.305284 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:56:45.316705 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:56:45.316736 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:56:45.316804 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:56:45.327560 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:56:45.327630 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:56:45.337910 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:56:45.349864 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:56:45.349948 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:56:45.361369 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.371797 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:56:45.371866 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.382861 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:56:45.393310 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:56:45.393382 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
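The stale-config cleanup above amounts to a check-and-remove pass over the four kubeconfig files; a minimal shell sketch of the same logic (not minikube's own code):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done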
	I0603 13:56:45.403822 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:45.476725 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:56:45.476794 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:45.630786 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:45.630956 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:45.631125 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:45.814370 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:45.816372 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:45.816481 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:45.816556 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:45.816710 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:45.816831 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:45.816928 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:45.817003 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:45.817093 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:45.817178 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:45.817328 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:45.817477 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:45.817533 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:45.817607 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:46.025905 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:46.331809 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:46.551488 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:46.636938 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:46.663292 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:46.663400 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:46.663448 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:46.840318 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:46.842399 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:56:46.842530 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:46.851940 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:46.855283 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:46.855443 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:46.857883 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:57:26.860915 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:57:26.861047 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:26.861296 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:31.861724 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:31.862046 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:41.862803 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:41.863057 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:01.862907 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:01.863136 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862069 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:41.862391 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862430 1143678 kubeadm.go:309] 
	I0603 13:58:41.862535 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:58:41.862613 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:58:41.862624 1143678 kubeadm.go:309] 
	I0603 13:58:41.862675 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:58:41.862737 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:58:41.862895 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:58:41.862909 1143678 kubeadm.go:309] 
	I0603 13:58:41.863030 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:58:41.863060 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:58:41.863090 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:58:41.863100 1143678 kubeadm.go:309] 
	I0603 13:58:41.863230 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:58:41.863388 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:58:41.863406 1143678 kubeadm.go:309] 
	I0603 13:58:41.863583 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:58:41.863709 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:58:41.863811 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:58:41.863894 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:58:41.863917 1143678 kubeadm.go:309] 
	I0603 13:58:41.865001 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:58:41.865120 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:58:41.865209 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 13:58:41.865361 1143678 kubeadm.go:393] duration metric: took 8m3.432874561s to StartCluster
	I0603 13:58:41.865460 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:58:41.865537 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:58:41.912780 1143678 cri.go:89] found id: ""
	I0603 13:58:41.912812 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.912826 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:58:41.912832 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:58:41.912901 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:58:41.951372 1143678 cri.go:89] found id: ""
	I0603 13:58:41.951402 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.951411 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:58:41.951418 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:58:41.951490 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:58:41.989070 1143678 cri.go:89] found id: ""
	I0603 13:58:41.989104 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.989115 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:58:41.989123 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:58:41.989191 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:58:42.026208 1143678 cri.go:89] found id: ""
	I0603 13:58:42.026238 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.026246 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:58:42.026252 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:58:42.026312 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:58:42.064899 1143678 cri.go:89] found id: ""
	I0603 13:58:42.064941 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.064950 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:58:42.064971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:58:42.065043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:58:42.098817 1143678 cri.go:89] found id: ""
	I0603 13:58:42.098858 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.098868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:58:42.098876 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:58:42.098939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:58:42.133520 1143678 cri.go:89] found id: ""
	I0603 13:58:42.133558 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.133570 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:58:42.133579 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:58:42.133639 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:58:42.187356 1143678 cri.go:89] found id: ""
	I0603 13:58:42.187387 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.187399 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:58:42.187412 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:58:42.187434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:58:42.249992 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:58:42.250034 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:58:42.272762 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:58:42.272801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:58:42.362004 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:58:42.362030 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:58:42.362046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:58:42.468630 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:58:42.468676 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0603 13:58:42.510945 1143678 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 13:58:42.511002 1143678 out.go:239] * 
	W0603 13:58:42.511094 1143678 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.511119 1143678 out.go:239] * 
	W0603 13:58:42.512307 1143678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:58:42.516199 1143678 out.go:177] 
	W0603 13:58:42.517774 1143678 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.517848 1143678 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 13:58:42.517883 1143678 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 13:58:42.519747 1143678 out.go:177] 
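The K8S_KUBELET_NOT_RUNNING exit above, together with the suggestion to pass --extra-config=kubelet.cgroup-driver=systemd (and the linked issue kubernetes/minikube#4172), points at a possible kubelet/CRI-O cgroup-driver mismatch. The commands below are a minimal diagnostic sketch and are not part of the captured logs; they assume the failing profile is still reachable over `minikube ssh`, and `<profile>` is a hypothetical placeholder for its name. The kubelet config path comes from the kubeadm output earlier in this log ("/var/lib/kubelet/config.yaml").

  # Compare the kubelet's configured cgroup driver with CRI-O's cgroup manager.
  minikube ssh -p <profile> -- "sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml"
  minikube ssh -p <profile> -- "sudo crio config 2>/dev/null | grep -i cgroup_manager"

  # If the two values differ, one option (taken from the suggestion in the log itself)
  # is to restart the profile with an explicit kubelet cgroup driver:
  #   minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd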
	
	
	==> CRI-O <==
	Jun 03 14:03:30 embed-certs-223260 crio[727]: time="2024-06-03 14:03:30.974200590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423410974173885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=973e2b54-1b99-4366-a9f5-5a98a34aa2ac name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:30 embed-certs-223260 crio[727]: time="2024-06-03 14:03:30.974865030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c78c8ef-8bfe-402b-b708-b1b9b6f5e9de name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:30 embed-certs-223260 crio[727]: time="2024-06-03 14:03:30.974917310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c78c8ef-8bfe-402b-b708-b1b9b6f5e9de name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:30 embed-certs-223260 crio[727]: time="2024-06-03 14:03:30.975112562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422631458640760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792eab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1bcc8fd115462594b57cf2f156c17b6430e92d0215d9e85b595b804bdde5a0,PodSandboxId:21c139c5637b1e6fb84ff27abb4d8ccc37204ab70f3839945ca14b6c0315fced,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422609263490429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 281b59a6-05da-460b-a9de-353a33f7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 86c813e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574,PodSandboxId:2236cb094f7ea29487f0c17b14b07af0c8a34d72721ccbb2b0e7e8dbcd75289b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422608426831157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdjrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a490ea5-c189-4d28-bd6b-509610d35f37,},Annotations:map[string]string{io.kubernetes.container.hash: b90f4363,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1,PodSandboxId:928e28c81071bc7e9c03b3c60aa6537d56e01260a9a1241612b3020d1ac622fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422600620990049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5vdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c515f67-d265-4140-8
2ec-ba9ac4ddda80,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc2270,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422600623813012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792e
ab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87,PodSandboxId:9e10d6ddb6a57b24c35cfeeb3344d2f0a50479e14797081a21e741618587ab78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422595884622538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb8fdbada528a524d35aa6977dd3e121,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05,PodSandboxId:9b2b4e4bc09bf83a74b7cda03a0c98a894560ad4ed473d4d5c97de96b5c9a0f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422595941894235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607b55931bc26d0cc508f778fa375941,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6d82c3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d,PodSandboxId:10bbc9d33598f91cd2fcdd614d2644ea6231fa73297da25751ec608eaccf4794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422595921637833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee5d442f159594c31098eb6daf7db70,},Annotations:map[string]string{io.k
ubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a,PodSandboxId:daff7e3f3b25385b3289805a323290be27422e2e530c8ce0806f63f5121d103e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422595838880008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd73855663a9f159bd07762c06815ac3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 97521c20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c78c8ef-8bfe-402b-b708-b1b9b6f5e9de name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.019974913Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b2fb1c5-dba0-4db2-a941-238c3a6376f3 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.020069647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b2fb1c5-dba0-4db2-a941-238c3a6376f3 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.021168945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5da4f8d-6526-43ba-b02e-ec1f128e3713 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.021860642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423411021835562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5da4f8d-6526-43ba-b02e-ec1f128e3713 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.022394944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c7a7060-8f77-4b20-b7f6-83d152f86a5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.022452767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c7a7060-8f77-4b20-b7f6-83d152f86a5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.022766907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422631458640760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792eab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1bcc8fd115462594b57cf2f156c17b6430e92d0215d9e85b595b804bdde5a0,PodSandboxId:21c139c5637b1e6fb84ff27abb4d8ccc37204ab70f3839945ca14b6c0315fced,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422609263490429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 281b59a6-05da-460b-a9de-353a33f7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 86c813e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574,PodSandboxId:2236cb094f7ea29487f0c17b14b07af0c8a34d72721ccbb2b0e7e8dbcd75289b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422608426831157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdjrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a490ea5-c189-4d28-bd6b-509610d35f37,},Annotations:map[string]string{io.kubernetes.container.hash: b90f4363,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1,PodSandboxId:928e28c81071bc7e9c03b3c60aa6537d56e01260a9a1241612b3020d1ac622fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422600620990049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5vdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c515f67-d265-4140-8
2ec-ba9ac4ddda80,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc2270,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422600623813012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792e
ab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87,PodSandboxId:9e10d6ddb6a57b24c35cfeeb3344d2f0a50479e14797081a21e741618587ab78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422595884622538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb8fdbada528a524d35aa6977dd3e121,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05,PodSandboxId:9b2b4e4bc09bf83a74b7cda03a0c98a894560ad4ed473d4d5c97de96b5c9a0f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422595941894235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607b55931bc26d0cc508f778fa375941,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6d82c3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d,PodSandboxId:10bbc9d33598f91cd2fcdd614d2644ea6231fa73297da25751ec608eaccf4794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422595921637833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee5d442f159594c31098eb6daf7db70,},Annotations:map[string]string{io.k
ubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a,PodSandboxId:daff7e3f3b25385b3289805a323290be27422e2e530c8ce0806f63f5121d103e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422595838880008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd73855663a9f159bd07762c06815ac3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 97521c20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c7a7060-8f77-4b20-b7f6-83d152f86a5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.063369616Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=069a4622-b6df-43c2-aca9-2acae3552095 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.063471571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=069a4622-b6df-43c2-aca9-2acae3552095 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.065002967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd855107-1c15-496d-a412-62ff196acfcc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.065672879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423411065611842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd855107-1c15-496d-a412-62ff196acfcc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.066113691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6b3127e-113b-4d59-8b05-3f39316d7d4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.066167075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6b3127e-113b-4d59-8b05-3f39316d7d4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.066425794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422631458640760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792eab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1bcc8fd115462594b57cf2f156c17b6430e92d0215d9e85b595b804bdde5a0,PodSandboxId:21c139c5637b1e6fb84ff27abb4d8ccc37204ab70f3839945ca14b6c0315fced,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422609263490429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 281b59a6-05da-460b-a9de-353a33f7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 86c813e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574,PodSandboxId:2236cb094f7ea29487f0c17b14b07af0c8a34d72721ccbb2b0e7e8dbcd75289b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422608426831157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdjrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a490ea5-c189-4d28-bd6b-509610d35f37,},Annotations:map[string]string{io.kubernetes.container.hash: b90f4363,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1,PodSandboxId:928e28c81071bc7e9c03b3c60aa6537d56e01260a9a1241612b3020d1ac622fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422600620990049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5vdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c515f67-d265-4140-8
2ec-ba9ac4ddda80,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc2270,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422600623813012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792e
ab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87,PodSandboxId:9e10d6ddb6a57b24c35cfeeb3344d2f0a50479e14797081a21e741618587ab78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422595884622538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb8fdbada528a524d35aa6977dd3e121,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05,PodSandboxId:9b2b4e4bc09bf83a74b7cda03a0c98a894560ad4ed473d4d5c97de96b5c9a0f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422595941894235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607b55931bc26d0cc508f778fa375941,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6d82c3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d,PodSandboxId:10bbc9d33598f91cd2fcdd614d2644ea6231fa73297da25751ec608eaccf4794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422595921637833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee5d442f159594c31098eb6daf7db70,},Annotations:map[string]string{io.k
ubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a,PodSandboxId:daff7e3f3b25385b3289805a323290be27422e2e530c8ce0806f63f5121d103e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422595838880008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd73855663a9f159bd07762c06815ac3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 97521c20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6b3127e-113b-4d59-8b05-3f39316d7d4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.110377652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52b9a88e-a6aa-4fed-a7ce-23bf8923d709 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.110508056Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52b9a88e-a6aa-4fed-a7ce-23bf8923d709 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.111968306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29b1bb20-2cd9-4e9e-b03a-b10ce45ab8fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.112818309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423411112783319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29b1bb20-2cd9-4e9e-b03a-b10ce45ab8fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.113861913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05d94d3e-6301-42a4-b828-3dee250642fb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.113935077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05d94d3e-6301-42a4-b828-3dee250642fb name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:31 embed-certs-223260 crio[727]: time="2024-06-03 14:03:31.114185971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422631458640760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792eab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1bcc8fd115462594b57cf2f156c17b6430e92d0215d9e85b595b804bdde5a0,PodSandboxId:21c139c5637b1e6fb84ff27abb4d8ccc37204ab70f3839945ca14b6c0315fced,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422609263490429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 281b59a6-05da-460b-a9de-353a33f7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 86c813e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574,PodSandboxId:2236cb094f7ea29487f0c17b14b07af0c8a34d72721ccbb2b0e7e8dbcd75289b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422608426831157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdjrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a490ea5-c189-4d28-bd6b-509610d35f37,},Annotations:map[string]string{io.kubernetes.container.hash: b90f4363,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1,PodSandboxId:928e28c81071bc7e9c03b3c60aa6537d56e01260a9a1241612b3020d1ac622fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422600620990049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5vdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c515f67-d265-4140-8
2ec-ba9ac4ddda80,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc2270,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422600623813012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792e
ab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87,PodSandboxId:9e10d6ddb6a57b24c35cfeeb3344d2f0a50479e14797081a21e741618587ab78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422595884622538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb8fdbada528a524d35aa6977dd3e121,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05,PodSandboxId:9b2b4e4bc09bf83a74b7cda03a0c98a894560ad4ed473d4d5c97de96b5c9a0f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422595941894235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607b55931bc26d0cc508f778fa375941,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6d82c3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d,PodSandboxId:10bbc9d33598f91cd2fcdd614d2644ea6231fa73297da25751ec608eaccf4794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422595921637833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee5d442f159594c31098eb6daf7db70,},Annotations:map[string]string{io.k
ubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a,PodSandboxId:daff7e3f3b25385b3289805a323290be27422e2e530c8ce0806f63f5121d103e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422595838880008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd73855663a9f159bd07762c06815ac3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 97521c20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05d94d3e-6301-42a4-b828-3dee250642fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e0c551e53061e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   2f1fb72c5f8c2       storage-provisioner
	9a1bcc8fd1154       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   21c139c5637b1       busybox
	f8daff2944ee2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   2236cb094f7ea       coredns-7db6d8ff4d-qdjrv
	141e89821d9ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   2f1fb72c5f8c2       storage-provisioner
	c17ec1b1cf666       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   928e28c81071b       kube-proxy-s5vdl
	114ee50eb8f33       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   9b2b4e4bc09bf       etcd-embed-certs-223260
	a4f8ab9c0a067       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   10bbc9d33598f       kube-controller-manager-embed-certs-223260
	f1a49ac6ea3e6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   9e10d6ddb6a57       kube-scheduler-embed-certs-223260
	45eebdf59dbe2       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   daff7e3f3b253       kube-apiserver-embed-certs-223260
	
	
	==> coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57497 - 43282 "HINFO IN 3215531351917476745.3466927403052893141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036473674s
	
	
	==> describe nodes <==
	Name:               embed-certs-223260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-223260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=embed-certs-223260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_42_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-223260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:03:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 14:00:42 +0000   Mon, 03 Jun 2024 13:42:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 14:00:42 +0000   Mon, 03 Jun 2024 13:42:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 14:00:42 +0000   Mon, 03 Jun 2024 13:42:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 14:00:42 +0000   Mon, 03 Jun 2024 13:50:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.246
	  Hostname:    embed-certs-223260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e7d84f5be1044fef8f60c281196faa94
	  System UUID:                e7d84f5b-e104-4fef-8f60-c281196faa94
	  Boot ID:                    6e007d64-1412-4605-8915-ff9f1ad29350
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-qdjrv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-223260                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-223260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-223260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-s5vdl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-223260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-v7d9t               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-223260 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-223260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-223260 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-223260 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-223260 event: Registered Node embed-certs-223260 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-223260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-223260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-223260 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-223260 event: Registered Node embed-certs-223260 in Controller
	
	
	==> dmesg <==
	[Jun 3 13:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052280] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040233] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.546708] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.405580] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.634814] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.861112] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.058440] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055014] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.166614] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.133550] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.307987] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.471170] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.062604] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.791615] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[Jun 3 13:50] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.473224] systemd-fstab-generator[1552]: Ignoring "noauto" option for root device
	[  +1.268933] kauditd_printk_skb: 62 callbacks suppressed
	[  +8.167713] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] <==
	{"level":"warn","ts":"2024-06-03T13:50:15.175834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.735167ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2929749742812550003 > lease_revoke:<id:28a88fde55e3a4fe>","response":"size:28"}
	{"level":"info","ts":"2024-06-03T13:50:15.175982Z","caller":"traceutil/trace.go:171","msg":"trace[930180762] linearizableReadLoop","detail":"{readStateIndex:643; appliedIndex:642; }","duration":"199.58422ms","start":"2024-06-03T13:50:14.976374Z","end":"2024-06-03T13:50:15.175958Z","steps":["trace[930180762] 'read index received'  (duration: 98.765µs)","trace[930180762] 'applied index is now lower than readState.Index'  (duration: 199.484531ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T13:50:15.176852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.436379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-7db6d8ff4d\" ","response":"range_response_count:1 size:3845"}
	{"level":"info","ts":"2024-06-03T13:50:15.176918Z","caller":"traceutil/trace.go:171","msg":"trace[1468531996] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-7db6d8ff4d; range_end:; response_count:1; response_revision:602; }","duration":"200.557996ms","start":"2024-06-03T13:50:14.976349Z","end":"2024-06-03T13:50:15.176907Z","steps":["trace[1468531996] 'agreement among raft nodes before linearized reading'  (duration: 200.413659ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:15.177171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.692337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2895"}
	{"level":"info","ts":"2024-06-03T13:50:15.177222Z","caller":"traceutil/trace.go:171","msg":"trace[1516422484] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:602; }","duration":"200.756565ms","start":"2024-06-03T13:50:14.976457Z","end":"2024-06-03T13:50:15.177213Z","steps":["trace[1516422484] 'agreement among raft nodes before linearized reading'  (duration: 200.632872ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:15.178049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.44376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/metrics-server\" ","response":"range_response_count:1 size:1654"}
	{"level":"info","ts":"2024-06-03T13:50:15.17812Z","caller":"traceutil/trace.go:171","msg":"trace[1217354040] range","detail":"{range_begin:/registry/services/specs/kube-system/metrics-server; range_end:; response_count:1; response_revision:602; }","duration":"201.52191ms","start":"2024-06-03T13:50:14.976587Z","end":"2024-06-03T13:50:15.178108Z","steps":["trace[1217354040] 'agreement among raft nodes before linearized reading'  (duration: 201.425025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:15.178636Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.052256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/metrics-server-569cc877fc\" ","response":"range_response_count:1 size:3200"}
	{"level":"info","ts":"2024-06-03T13:50:15.178695Z","caller":"traceutil/trace.go:171","msg":"trace[550568889] range","detail":"{range_begin:/registry/replicasets/kube-system/metrics-server-569cc877fc; range_end:; response_count:1; response_revision:602; }","duration":"202.122575ms","start":"2024-06-03T13:50:14.976564Z","end":"2024-06-03T13:50:15.178686Z","steps":["trace[550568889] 'agreement among raft nodes before linearized reading'  (duration: 202.036454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:15.178891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.318962ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/kube-dns\" ","response":"range_response_count:1 size:1211"}
	{"level":"info","ts":"2024-06-03T13:50:15.178945Z","caller":"traceutil/trace.go:171","msg":"trace[1053540742] range","detail":"{range_begin:/registry/services/specs/kube-system/kube-dns; range_end:; response_count:1; response_revision:602; }","duration":"202.39256ms","start":"2024-06-03T13:50:14.976542Z","end":"2024-06-03T13:50:15.178934Z","steps":["trace[1053540742] 'agreement among raft nodes before linearized reading'  (duration: 202.318086ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:50:16.178532Z","caller":"traceutil/trace.go:171","msg":"trace[785430932] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"259.336394ms","start":"2024-06-03T13:50:15.919175Z","end":"2024-06-03T13:50:16.178512Z","steps":["trace[785430932] 'process raft request'  (duration: 259.02513ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:16.774438Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"588.443358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-223260\" ","response":"range_response_count:1 size:5500"}
	{"level":"info","ts":"2024-06-03T13:50:16.774505Z","caller":"traceutil/trace.go:171","msg":"trace[714207542] range","detail":"{range_begin:/registry/minions/embed-certs-223260; range_end:; response_count:1; response_revision:603; }","duration":"588.53985ms","start":"2024-06-03T13:50:16.185955Z","end":"2024-06-03T13:50:16.774494Z","steps":["trace[714207542] 'range keys from in-memory index tree'  (duration: 588.343048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:16.774542Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:16.185945Z","time spent":"588.584182ms","remote":"127.0.0.1:60652","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5523,"request content":"key:\"/registry/minions/embed-certs-223260\" "}
	{"level":"info","ts":"2024-06-03T13:50:16.774702Z","caller":"traceutil/trace.go:171","msg":"trace[423434282] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"583.897095ms","start":"2024-06-03T13:50:16.190793Z","end":"2024-06-03T13:50:16.77469Z","steps":["trace[423434282] 'process raft request'  (duration: 583.655177ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:16.775386Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:16.19078Z","time spent":"583.992729ms","remote":"127.0.0.1:60662","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6707,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-223260\" mod_revision:603 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-223260\" value_size:6639 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-223260\" > >"}
	{"level":"warn","ts":"2024-06-03T13:50:38.592544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.017394ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2929749742812550189 > lease_revoke:<id:28a88fde5d2253b9>","response":"size:28"}
	{"level":"info","ts":"2024-06-03T13:50:38.592685Z","caller":"traceutil/trace.go:171","msg":"trace[9174562] linearizableReadLoop","detail":"{readStateIndex:669; appliedIndex:668; }","duration":"289.514822ms","start":"2024-06-03T13:50:38.303143Z","end":"2024-06-03T13:50:38.592658Z","steps":["trace[9174562] 'read index received'  (duration: 24.302777ms)","trace[9174562] 'applied index is now lower than readState.Index'  (duration: 265.210791ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T13:50:38.592819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.697095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-v7d9t\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-06-03T13:50:38.592851Z","caller":"traceutil/trace.go:171","msg":"trace[232053768] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-v7d9t; range_end:; response_count:1; response_revision:623; }","duration":"289.755295ms","start":"2024-06-03T13:50:38.303084Z","end":"2024-06-03T13:50:38.592839Z","steps":["trace[232053768] 'agreement among raft nodes before linearized reading'  (duration: 289.631057ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:59:58.42747Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":848}
	{"level":"info","ts":"2024-06-03T13:59:58.439305Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":848,"took":"11.438145ms","hash":531994192,"current-db-size-bytes":2670592,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2670592,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-06-03T13:59:58.439373Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":531994192,"revision":848,"compact-revision":-1}
	
	
	==> kernel <==
	 14:03:31 up 14 min,  0 users,  load average: 0.10, 0.17, 0.12
	Linux embed-certs-223260 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] <==
	I0603 13:58:00.770825       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 13:59:59.773782       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 13:59:59.774148       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 14:00:00.774454       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:00:00.774615       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:00:00.774655       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:00:00.774587       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:00:00.774768       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:00:00.775924       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:01:00.774882       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:01:00.774972       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:01:00.774982       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:01:00.776055       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:01:00.776152       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:01:00.776179       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:03:00.775861       1 handler_proxy.go:93] no RequestInfo found in the context
	W0603 14:03:00.776316       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:03:00.776384       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:03:00.776433       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0603 14:03:00.776397       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:03:00.778480       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] <==
	I0603 13:57:45.104740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 13:58:14.639860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 13:58:15.113234       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 13:58:44.645551       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 13:58:45.123751       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 13:59:14.651484       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 13:59:15.131654       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 13:59:44.656111       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 13:59:45.139185       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:00:14.661023       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:00:15.152499       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:00:44.666879       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:00:45.160800       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:01:14.672894       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:01:15.169194       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 14:01:27.256509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="285.014µs"
	I0603 14:01:38.253835       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="48.206µs"
	E0603 14:01:44.676849       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:01:45.176774       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:02:14.682542       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:02:15.186090       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:02:44.687388       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:02:45.194699       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:03:14.692458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:03:15.203354       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] <==
	I0603 13:50:00.880068       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:50:00.896867       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.83.246"]
	I0603 13:50:00.949675       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:50:00.949766       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:50:00.949897       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:50:00.953304       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:50:00.953550       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:50:00.953808       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:50:00.955560       1 config.go:192] "Starting service config controller"
	I0603 13:50:00.955619       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:50:00.955667       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:50:00.955687       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:50:00.956159       1 config.go:319] "Starting node config controller"
	I0603 13:50:00.956192       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:50:01.056459       1 shared_informer.go:320] Caches are synced for node config
	I0603 13:50:01.056555       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:50:01.056589       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] <==
	I0603 13:49:57.121122       1 serving.go:380] Generated self-signed cert in-memory
	W0603 13:49:59.686786       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 13:49:59.686891       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 13:49:59.686905       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 13:49:59.686962       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 13:49:59.756799       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 13:49:59.756888       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:49:59.762436       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 13:49:59.762633       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 13:49:59.762672       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 13:49:59.764330       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 13:49:59.866831       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 14:01:12 embed-certs-223260 kubelet[939]: E0603 14:01:12.275424     939 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 03 14:01:12 embed-certs-223260 kubelet[939]: E0603 14:01:12.275837     939 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 03 14:01:12 embed-certs-223260 kubelet[939]: E0603 14:01:12.276176     939 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2pdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-v7d9t_kube-system(e89c698d-7aab-4acd-a9b3-5ba0315ad681): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 03 14:01:12 embed-certs-223260 kubelet[939]: E0603 14:01:12.276405     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:01:27 embed-certs-223260 kubelet[939]: E0603 14:01:27.238644     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:01:38 embed-certs-223260 kubelet[939]: E0603 14:01:38.238849     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:01:49 embed-certs-223260 kubelet[939]: E0603 14:01:49.238729     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:01:55 embed-certs-223260 kubelet[939]: E0603 14:01:55.253599     939 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:01:55 embed-certs-223260 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:01:55 embed-certs-223260 kubelet[939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:01:55 embed-certs-223260 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:01:55 embed-certs-223260 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:02:00 embed-certs-223260 kubelet[939]: E0603 14:02:00.237983     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:02:14 embed-certs-223260 kubelet[939]: E0603 14:02:14.238495     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:02:26 embed-certs-223260 kubelet[939]: E0603 14:02:26.238444     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:02:37 embed-certs-223260 kubelet[939]: E0603 14:02:37.238638     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:02:48 embed-certs-223260 kubelet[939]: E0603 14:02:48.238038     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:02:55 embed-certs-223260 kubelet[939]: E0603 14:02:55.255791     939 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:02:55 embed-certs-223260 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:02:55 embed-certs-223260 kubelet[939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:02:55 embed-certs-223260 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:02:55 embed-certs-223260 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:03:03 embed-certs-223260 kubelet[939]: E0603 14:03:03.239806     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:03:16 embed-certs-223260 kubelet[939]: E0603 14:03:16.238397     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:03:30 embed-certs-223260 kubelet[939]: E0603 14:03:30.238066     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	
	
	==> storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] <==
	I0603 13:50:00.789165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0603 13:50:30.795562       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] <==
	I0603 13:50:31.571689       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 13:50:31.585683       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 13:50:31.585798       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 13:50:48.992210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 13:50:48.992920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-223260_c0a76c61-0743-4c2f-ba8a-ad97be818e25!
	I0603 13:50:48.993440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"353379f3-5b07-45b6-b1e9-5e7fcc2c94ed", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-223260_c0a76c61-0743-4c2f-ba8a-ad97be818e25 became leader
	I0603 13:50:49.093426       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-223260_c0a76c61-0743-4c2f-ba8a-ad97be818e25!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223260 -n embed-certs-223260
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-223260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-v7d9t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-223260 describe pod metrics-server-569cc877fc-v7d9t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-223260 describe pod metrics-server-569cc877fc-v7d9t: exit status 1 (66.13375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-v7d9t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-223260 describe pod metrics-server-569cc877fc-v7d9t: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0603 13:54:58.228313 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 13:55:11.525797 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:55:15.397097 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:55:54.355541 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-03 14:03:49.850562511 +0000 UTC m=+5982.055121027
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-030870 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-030870 logs -n 25: (2.26427783s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-021279 sudo cat                              | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo find                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo crio                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-021279                                       | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-069000 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | disable-driver-mounts-069000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:43 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-817450             | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC | 03 Jun 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223260            | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-030870  | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-817450                  | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-151788        | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC | 03 Jun 24 13:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223260                 | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC | 03 Jun 24 13:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-030870       | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:54 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-151788             | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:46:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:46:22.347386 1143678 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:46:22.347655 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347666 1143678 out.go:304] Setting ErrFile to fd 2...
	I0603 13:46:22.347672 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347855 1143678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:46:22.348458 1143678 out.go:298] Setting JSON to false
	I0603 13:46:22.349502 1143678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16129,"bootTime":1717406253,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:46:22.349571 1143678 start.go:139] virtualization: kvm guest
	I0603 13:46:22.351720 1143678 out.go:177] * [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:46:22.353180 1143678 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:46:22.353235 1143678 notify.go:220] Checking for updates...
	I0603 13:46:22.354400 1143678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:46:22.355680 1143678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:46:22.356796 1143678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:46:22.357952 1143678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:46:22.359052 1143678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:46:22.360807 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:46:22.361230 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.361306 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.376241 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0603 13:46:22.376679 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.377267 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.377292 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.377663 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.377897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.379705 1143678 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 13:46:22.380895 1143678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:46:22.381188 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.381222 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.396163 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0603 13:46:22.396669 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.397158 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.397180 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.397509 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.397693 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.433731 1143678 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:46:22.434876 1143678 start.go:297] selected driver: kvm2
	I0603 13:46:22.434897 1143678 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.435028 1143678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:46:22.435716 1143678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.435807 1143678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:46:22.451200 1143678 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:46:22.451663 1143678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:46:22.451755 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:46:22.451773 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:46:22.451832 1143678 start.go:340] cluster config:
	{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.451961 1143678 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.454327 1143678 out.go:177] * Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	I0603 13:46:22.057705 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:22.455453 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:46:22.455492 1143678 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 13:46:22.455501 1143678 cache.go:56] Caching tarball of preloaded images
	I0603 13:46:22.455591 1143678 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:46:22.455604 1143678 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 13:46:22.455685 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:46:22.455860 1143678 start.go:360] acquireMachinesLock for old-k8s-version-151788: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:46:28.137725 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:31.209684 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:37.289692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:40.361614 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:46.441692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:49.513686 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:55.593727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:58.665749 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:04.745752 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:07.817726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:13.897702 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:16.969727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:23.049716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:26.121758 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:32.201765 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:35.273759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:41.353716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:44.425767 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:50.505743 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:53.577777 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:59.657729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:02.729769 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:08.809709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:11.881708 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:17.961759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:21.033726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:27.113698 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:30.185691 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:36.265722 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:39.337764 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:45.417711 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:48.489729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:54.569746 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:57.641701 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:03.721772 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:06.793709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:12.873710 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:15.945728 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:22.025678 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:25.097675 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:28.102218 1143252 start.go:364] duration metric: took 3m44.709006863s to acquireMachinesLock for "embed-certs-223260"
	I0603 13:49:28.102293 1143252 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:28.102302 1143252 fix.go:54] fixHost starting: 
	I0603 13:49:28.102635 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:28.102666 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:28.118384 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0603 13:49:28.119014 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:28.119601 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:49:28.119630 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:28.119930 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:28.120116 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:28.120302 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:49:28.122003 1143252 fix.go:112] recreateIfNeeded on embed-certs-223260: state=Stopped err=<nil>
	I0603 13:49:28.122030 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	W0603 13:49:28.122167 1143252 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:28.123963 1143252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223260" ...
	I0603 13:49:28.125564 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Start
	I0603 13:49:28.125750 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring networks are active...
	I0603 13:49:28.126598 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network default is active
	I0603 13:49:28.126965 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network mk-embed-certs-223260 is active
	I0603 13:49:28.127319 1143252 main.go:141] libmachine: (embed-certs-223260) Getting domain xml...
	I0603 13:49:28.128017 1143252 main.go:141] libmachine: (embed-certs-223260) Creating domain...
	I0603 13:49:28.099474 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:28.099536 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.099883 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:49:28.099915 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.100115 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:49:28.102052 1142862 machine.go:97] duration metric: took 4m37.409499751s to provisionDockerMachine
	I0603 13:49:28.102123 1142862 fix.go:56] duration metric: took 4m37.432963538s for fixHost
	I0603 13:49:28.102135 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 4m37.432994587s
	W0603 13:49:28.102158 1142862 start.go:713] error starting host: provision: host is not running
	W0603 13:49:28.102317 1142862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 13:49:28.102332 1142862 start.go:728] Will try again in 5 seconds ...
	I0603 13:49:29.332986 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting to get IP...
	I0603 13:49:29.333963 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.334430 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.334475 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.334403 1144333 retry.go:31] will retry after 203.681987ms: waiting for machine to come up
	I0603 13:49:29.539995 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.540496 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.540564 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.540457 1144333 retry.go:31] will retry after 368.548292ms: waiting for machine to come up
	I0603 13:49:29.911212 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.911632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.911665 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.911566 1144333 retry.go:31] will retry after 402.690969ms: waiting for machine to come up
	I0603 13:49:30.316480 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.316889 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.316920 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.316852 1144333 retry.go:31] will retry after 500.397867ms: waiting for machine to come up
	I0603 13:49:30.818653 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.819082 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.819107 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.819026 1144333 retry.go:31] will retry after 663.669804ms: waiting for machine to come up
	I0603 13:49:31.483776 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:31.484117 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:31.484144 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:31.484079 1144333 retry.go:31] will retry after 938.433137ms: waiting for machine to come up
	I0603 13:49:32.424128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:32.424609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:32.424640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:32.424548 1144333 retry.go:31] will retry after 919.793328ms: waiting for machine to come up
	I0603 13:49:33.103895 1142862 start.go:360] acquireMachinesLock for no-preload-817450: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:49:33.346091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:33.346549 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:33.346574 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:33.346511 1144333 retry.go:31] will retry after 1.115349726s: waiting for machine to come up
	I0603 13:49:34.463875 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:34.464588 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:34.464616 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:34.464529 1144333 retry.go:31] will retry after 1.153940362s: waiting for machine to come up
	I0603 13:49:35.619844 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:35.620243 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:35.620275 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:35.620176 1144333 retry.go:31] will retry after 1.514504154s: waiting for machine to come up
	I0603 13:49:37.135961 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:37.136409 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:37.136431 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:37.136382 1144333 retry.go:31] will retry after 2.757306897s: waiting for machine to come up
	I0603 13:49:39.895589 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:39.895942 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:39.895970 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:39.895881 1144333 retry.go:31] will retry after 3.019503072s: waiting for machine to come up
	I0603 13:49:42.919177 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:42.919640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:42.919670 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:42.919588 1144333 retry.go:31] will retry after 3.150730989s: waiting for machine to come up
	I0603 13:49:47.494462 1143450 start.go:364] duration metric: took 3m37.207410663s to acquireMachinesLock for "default-k8s-diff-port-030870"
	I0603 13:49:47.494544 1143450 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:47.494557 1143450 fix.go:54] fixHost starting: 
	I0603 13:49:47.494876 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:47.494918 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:47.511570 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44939
	I0603 13:49:47.512072 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:47.512568 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:49:47.512593 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:47.512923 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:47.513117 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:49:47.513276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:49:47.514783 1143450 fix.go:112] recreateIfNeeded on default-k8s-diff-port-030870: state=Stopped err=<nil>
	I0603 13:49:47.514817 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	W0603 13:49:47.514999 1143450 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:47.517441 1143450 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-030870" ...
	I0603 13:49:46.071609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072094 1143252 main.go:141] libmachine: (embed-certs-223260) Found IP for machine: 192.168.83.246
	I0603 13:49:46.072117 1143252 main.go:141] libmachine: (embed-certs-223260) Reserving static IP address...
	I0603 13:49:46.072132 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has current primary IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072552 1143252 main.go:141] libmachine: (embed-certs-223260) Reserved static IP address: 192.168.83.246
	I0603 13:49:46.072585 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.072593 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting for SSH to be available...
	I0603 13:49:46.072632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | skip adding static IP to network mk-embed-certs-223260 - found existing host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"}
	I0603 13:49:46.072655 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Getting to WaitForSSH function...
	I0603 13:49:46.074738 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075059 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.075091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075189 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH client type: external
	I0603 13:49:46.075213 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa (-rw-------)
	I0603 13:49:46.075249 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:49:46.075271 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | About to run SSH command:
	I0603 13:49:46.075283 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | exit 0
	I0603 13:49:46.197971 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | SSH cmd err, output: <nil>: 
	I0603 13:49:46.198498 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetConfigRaw
	I0603 13:49:46.199179 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.201821 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.202277 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202533 1143252 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/config.json ...
	I0603 13:49:46.202727 1143252 machine.go:94] provisionDockerMachine start ...
	I0603 13:49:46.202745 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:46.202964 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.205259 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205636 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.205663 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205773 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.205954 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206100 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206318 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.206538 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.206819 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.206837 1143252 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:49:46.310241 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:49:46.310277 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310583 1143252 buildroot.go:166] provisioning hostname "embed-certs-223260"
	I0603 13:49:46.310616 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310836 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.313692 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314078 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.314116 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314222 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.314446 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314631 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314800 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.314969 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.315166 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.315183 1143252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223260 && echo "embed-certs-223260" | sudo tee /etc/hostname
	I0603 13:49:46.428560 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223260
	
	I0603 13:49:46.428600 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.431381 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.431757 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.431784 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.432021 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.432283 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432477 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432609 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.432785 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.432960 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.432976 1143252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223260/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:49:46.542400 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:46.542446 1143252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:49:46.542536 1143252 buildroot.go:174] setting up certificates
	I0603 13:49:46.542557 1143252 provision.go:84] configureAuth start
	I0603 13:49:46.542576 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.542913 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.545940 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546339 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.546368 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.548715 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549097 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.549127 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549294 1143252 provision.go:143] copyHostCerts
	I0603 13:49:46.549382 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:49:46.549397 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:49:46.549486 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:49:46.549578 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:49:46.549587 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:49:46.549613 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:49:46.549664 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:49:46.549671 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:49:46.549690 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:49:46.549740 1143252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223260 san=[127.0.0.1 192.168.83.246 embed-certs-223260 localhost minikube]
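The provision step above mints a server certificate signed by the local minikube CA, with SANs covering 127.0.0.1, the guest IP 192.168.83.246, the profile name, localhost and minikube. A minimal Go sketch of issuing such a SAN-bearing server certificate from an existing CA pair follows; the file names, the RSA/PKCS#1 key format and the 3-year validity are assumptions for illustration, not minikube's actual provision code.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func must(err error) {
    	if err != nil {
    		log.Fatal(err)
    	}
    }

    func main() {
    	// Load the CA certificate and key (placeholder paths, assumed PKCS#1 RSA key).
    	caPEM, err := os.ReadFile("ca.pem")
    	must(err)
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	must(err)
    	caBlock, _ := pem.Decode(caPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	must(err)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    	must(err)

    	// New server key plus a template carrying the SANs listed in the log line above.
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-223260"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-223260", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.246")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	must(err)
    	must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
    	must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600))
    }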
	I0603 13:49:46.807050 1143252 provision.go:177] copyRemoteCerts
	I0603 13:49:46.807111 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:49:46.807140 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.809916 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810303 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.810347 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810513 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.810758 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.810929 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.811168 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:46.892182 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:49:46.916657 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0603 13:49:46.941896 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:49:46.967292 1143252 provision.go:87] duration metric: took 424.714334ms to configureAuth
	I0603 13:49:46.967331 1143252 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:49:46.967539 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:49:46.967626 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.970350 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970668 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.970703 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970870 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.971115 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971314 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971454 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.971625 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.971809 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.971831 1143252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:49:47.264894 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:49:47.264922 1143252 machine.go:97] duration metric: took 1.062182146s to provisionDockerMachine
	I0603 13:49:47.264935 1143252 start.go:293] postStartSetup for "embed-certs-223260" (driver="kvm2")
	I0603 13:49:47.264946 1143252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:49:47.264963 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.265368 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:49:47.265398 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.268412 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268765 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.268796 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.269223 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.269455 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.269625 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.348583 1143252 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:49:47.352828 1143252 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:49:47.352867 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:49:47.352949 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:49:47.353046 1143252 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:49:47.353164 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:49:47.363222 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:47.388132 1143252 start.go:296] duration metric: took 123.177471ms for postStartSetup
	I0603 13:49:47.388202 1143252 fix.go:56] duration metric: took 19.285899119s for fixHost
	I0603 13:49:47.388233 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.390960 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391414 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.391477 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391681 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.391937 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392127 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392266 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.392436 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:47.392670 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:47.392687 1143252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:49:47.494294 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422587.469729448
	
	I0603 13:49:47.494320 1143252 fix.go:216] guest clock: 1717422587.469729448
	I0603 13:49:47.494328 1143252 fix.go:229] Guest: 2024-06-03 13:49:47.469729448 +0000 UTC Remote: 2024-06-03 13:49:47.388208749 +0000 UTC m=+244.138441135 (delta=81.520699ms)
	I0603 13:49:47.494354 1143252 fix.go:200] guest clock delta is within tolerance: 81.520699ms
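fix.go above reads the guest clock over SSH (date +%s.%N), compares it with the host timestamp, and accepts the skew when it falls within a tolerance. A small Go sketch of that comparison, using the two timestamps from the log; the 2s tolerance is an assumption for illustration, not minikube's actual constant.

    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    // guestClockWithinTolerance returns the guest-minus-host skew and whether its
    // absolute value is within the given tolerance.
    func guestClockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
    	guest := time.Unix(1717422587, 469729448)                      // guest "date +%s.%N" value from the log
    	host := time.Date(2024, 6, 3, 13, 49, 47, 388208749, time.UTC) // host ("Remote") timestamp from the log
    	delta, ok := guestClockWithinTolerance(guest, host, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }

Running this reproduces the ~81.5ms delta reported in the log, well inside any reasonable tolerance.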
	I0603 13:49:47.494361 1143252 start.go:83] releasing machines lock for "embed-certs-223260", held for 19.392103897s
	I0603 13:49:47.494394 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.494686 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:47.497515 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.497930 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.497976 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.498110 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498672 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498859 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498934 1143252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:49:47.498988 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.499062 1143252 ssh_runner.go:195] Run: cat /version.json
	I0603 13:49:47.499082 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.501788 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502075 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502131 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502156 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502291 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502390 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502427 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502550 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502647 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502738 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502806 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502942 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502955 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.503078 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.612706 1143252 ssh_runner.go:195] Run: systemctl --version
	I0603 13:49:47.618922 1143252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:49:47.764749 1143252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:49:47.770936 1143252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:49:47.771023 1143252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:49:47.788401 1143252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:49:47.788427 1143252 start.go:494] detecting cgroup driver to use...
	I0603 13:49:47.788486 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:49:47.805000 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:49:47.822258 1143252 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:49:47.822315 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:49:47.837826 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:49:47.853818 1143252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:49:47.978204 1143252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:49:48.106302 1143252 docker.go:233] disabling docker service ...
	I0603 13:49:48.106366 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:49:48.120974 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:49:48.134911 1143252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:49:48.278103 1143252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:49:48.398238 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:49:48.413207 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:49:48.432211 1143252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:49:48.432281 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.443668 1143252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:49:48.443746 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.454990 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.467119 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.479875 1143252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:49:48.496767 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.508872 1143252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.530972 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.542631 1143252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:49:48.552775 1143252 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:49:48.552836 1143252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:49:48.566528 1143252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:49:48.582917 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:48.716014 1143252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:49:48.860157 1143252 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:49:48.860283 1143252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:49:48.865046 1143252 start.go:562] Will wait 60s for crictl version
	I0603 13:49:48.865121 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:49:48.869520 1143252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:49:48.909721 1143252 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:49:48.909819 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.939080 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.970595 1143252 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:49:47.518807 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Start
	I0603 13:49:47.518981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring networks are active...
	I0603 13:49:47.519623 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network default is active
	I0603 13:49:47.519926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network mk-default-k8s-diff-port-030870 is active
	I0603 13:49:47.520408 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Getting domain xml...
	I0603 13:49:47.521014 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Creating domain...
	I0603 13:49:48.798483 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting to get IP...
	I0603 13:49:48.799695 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800174 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800305 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:48.800165 1144471 retry.go:31] will retry after 204.161843ms: waiting for machine to come up
	I0603 13:49:49.005669 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006143 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006180 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.006091 1144471 retry.go:31] will retry after 382.751679ms: waiting for machine to come up
	I0603 13:49:49.391162 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391717 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391750 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.391670 1144471 retry.go:31] will retry after 314.248576ms: waiting for machine to come up
	I0603 13:49:49.707349 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707957 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707990 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.707856 1144471 retry.go:31] will retry after 446.461931ms: waiting for machine to come up
	I0603 13:49:50.155616 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156238 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156274 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.156174 1144471 retry.go:31] will retry after 712.186964ms: waiting for machine to come up
	I0603 13:49:48.971971 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:48.975079 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975439 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:48.975471 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975721 1143252 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0603 13:49:48.980114 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:48.993380 1143252 kubeadm.go:877] updating cluster {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:49:48.993543 1143252 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:49:48.993636 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:49.032289 1143252 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:49:49.032364 1143252 ssh_runner.go:195] Run: which lz4
	I0603 13:49:49.036707 1143252 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 13:49:49.040973 1143252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:49:49.041000 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:49:50.554295 1143252 crio.go:462] duration metric: took 1.517623353s to copy over tarball
	I0603 13:49:50.554387 1143252 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:49:52.823733 1143252 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269303423s)
	I0603 13:49:52.823785 1143252 crio.go:469] duration metric: took 2.269454274s to extract the tarball
	I0603 13:49:52.823799 1143252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:49:52.862060 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:52.906571 1143252 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:49:52.906602 1143252 cache_images.go:84] Images are preloaded, skipping loading
	I0603 13:49:52.906618 1143252 kubeadm.go:928] updating node { 192.168.83.246 8443 v1.30.1 crio true true} ...
	I0603 13:49:52.906774 1143252 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:49:52.906866 1143252 ssh_runner.go:195] Run: crio config
	I0603 13:49:52.954082 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:49:52.954111 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:49:52.954129 1143252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:49:52.954159 1143252 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.246 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223260 NodeName:embed-certs-223260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:49:52.954355 1143252 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223260"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:49:52.954446 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:49:52.964488 1143252 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:49:52.964582 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:49:52.974118 1143252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 13:49:52.990701 1143252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:49:53.007539 1143252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 13:49:53.024943 1143252 ssh_runner.go:195] Run: grep 192.168.83.246	control-plane.minikube.internal$ /etc/hosts
	I0603 13:49:53.029097 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:53.041234 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:53.178449 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:49:53.195718 1143252 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260 for IP: 192.168.83.246
	I0603 13:49:53.195750 1143252 certs.go:194] generating shared ca certs ...
	I0603 13:49:53.195769 1143252 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:49:53.195954 1143252 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:49:53.196021 1143252 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:49:53.196035 1143252 certs.go:256] generating profile certs ...
	I0603 13:49:53.196256 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/client.key
	I0603 13:49:53.196341 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key.90d43877
	I0603 13:49:53.196437 1143252 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key
	I0603 13:49:53.196605 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:49:53.196663 1143252 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:49:53.196678 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:49:53.196708 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:49:53.196756 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:49:53.196787 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:49:53.196838 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:53.197895 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:49:53.231612 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:49:53.263516 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:49:50.870317 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870816 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870841 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.870781 1144471 retry.go:31] will retry after 855.15183ms: waiting for machine to come up
	I0603 13:49:51.727393 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727960 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:51.727869 1144471 retry.go:31] will retry after 997.293541ms: waiting for machine to come up
	I0603 13:49:52.726578 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727036 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727073 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:52.726953 1144471 retry.go:31] will retry after 1.4233414s: waiting for machine to come up
	I0603 13:49:54.151594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152072 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152099 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:54.152021 1144471 retry.go:31] will retry after 1.348888248s: waiting for machine to come up
	I0603 13:49:53.303724 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:49:53.334700 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 13:49:53.371594 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:49:53.396381 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:49:53.420985 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:49:53.445334 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:49:53.469632 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:49:53.495720 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:49:53.522416 1143252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:49:53.541593 1143252 ssh_runner.go:195] Run: openssl version
	I0603 13:49:53.547653 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:49:53.558802 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563511 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563579 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.569691 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:49:53.582814 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:49:53.595684 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600613 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.607008 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:49:53.619919 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:49:53.632663 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637604 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.643844 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:49:53.655934 1143252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:49:53.660801 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:49:53.667391 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:49:53.674382 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:49:53.681121 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:49:53.687496 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:49:53.693623 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
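Each "openssl x509 -noout -checkend 86400" run above asks whether a certificate expires within the next 24 hours. The same check can be expressed in Go against the parsed NotAfter field; the path used in main is taken from the log and only exists inside the guest, so substitute any certificate when trying it.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // roughly what "openssl x509 -noout -checkend 86400" tests in the log above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }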
	I0603 13:49:53.699764 1143252 kubeadm.go:391] StartCluster: {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:49:53.699871 1143252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:49:53.699928 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.736588 1143252 cri.go:89] found id: ""
	I0603 13:49:53.736662 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:49:53.750620 1143252 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:49:53.750644 1143252 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:49:53.750652 1143252 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:49:53.750716 1143252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:49:53.765026 1143252 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:49:53.766297 1143252 kubeconfig.go:125] found "embed-certs-223260" server: "https://192.168.83.246:8443"
	I0603 13:49:53.768662 1143252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:49:53.779583 1143252 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.246
	I0603 13:49:53.779625 1143252 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:49:53.779639 1143252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:49:53.779695 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.820312 1143252 cri.go:89] found id: ""
	I0603 13:49:53.820398 1143252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:49:53.838446 1143252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:49:53.849623 1143252 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:49:53.849643 1143252 kubeadm.go:156] found existing configuration files:
	
	I0603 13:49:53.849700 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:49:53.859379 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:49:53.859451 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:49:53.869939 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:49:53.880455 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:49:53.880527 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:49:53.890918 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.900841 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:49:53.900894 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.910968 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:49:53.921064 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:49:53.921121 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:49:53.931550 1143252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:49:53.942309 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.078959 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.842079 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.043420 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.111164 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.220384 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:49:55.220475 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:55.721612 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.221513 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.257801 1143252 api_server.go:72] duration metric: took 1.037411844s to wait for apiserver process to appear ...
	I0603 13:49:56.257845 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:49:56.257874 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
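From here the log polls https://192.168.83.246:8443/healthz until the apiserver reports healthy, tolerating the 403 and 500 responses seen below while RBAC bootstrap and the post-start hooks finish. A simplified Go sketch of such a polling loop; the insecure TLS config, the 4-minute deadline and the 500ms interval are assumptions for illustration rather than minikube's exact settings.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
    // or the deadline passes. Non-200 answers (403 before RBAC bootstrap, 500 while
    // post-start hooks run) simply trigger another retry.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d, retrying\n", code)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.83.246:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }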
	I0603 13:49:55.502734 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503282 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503313 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:55.503226 1144471 retry.go:31] will retry after 1.733012887s: waiting for machine to come up
	I0603 13:49:57.238544 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.238975 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.239006 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:57.238917 1144471 retry.go:31] will retry after 2.565512625s: waiting for machine to come up
	I0603 13:49:59.806662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807077 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807105 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:59.807024 1144471 retry.go:31] will retry after 2.759375951s: waiting for machine to come up
	I0603 13:49:59.684015 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.684058 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.684078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.757751 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.757791 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.758846 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.779923 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:49:59.779974 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.258098 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.265061 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.265089 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.758643 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.764364 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.764400 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.257950 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.262846 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.262875 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.758078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.763269 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.763301 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.258641 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.263628 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.263658 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.758205 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.765436 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.765470 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:03.258663 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:03.263141 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:50:03.269787 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:03.269817 1143252 api_server.go:131] duration metric: took 7.011964721s to wait for apiserver health ...
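
The 403 → 500 → 200 progression above is the normal kube-apiserver warm-up as seen from an anonymous /healthz probe: 403 until the RBAC bootstrap roles exist, 500 while post-start hooks are still finishing, then 200 once every check passes. A minimal Go sketch of the same polling loop follows; the URL and overall pattern are taken from the api_server.go lines above, while the function name and the InsecureSkipVerify shortcut are illustrative (minikube's real client authenticates with the cluster's certificates).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 OK or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 while RBAC bootstrap roles are missing, 500 while
			// post-start hooks are still failing: keep retrying.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.246:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
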
	I0603 13:50:03.269827 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:50:03.269833 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:03.271812 1143252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:03.273154 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:03.285329 1143252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:50:03.305480 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:03.317546 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:03.317601 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:03.317614 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:03.317627 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:03.317637 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:03.317645 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:50:03.317658 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:03.317667 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:03.317677 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:50:03.317686 1143252 system_pods.go:74] duration metric: took 12.177585ms to wait for pod list to return data ...
	I0603 13:50:03.317695 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:03.321445 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:03.321479 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:03.321493 1143252 node_conditions.go:105] duration metric: took 3.787651ms to run NodePressure ...
	I0603 13:50:03.321512 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:03.598576 1143252 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604196 1143252 kubeadm.go:733] kubelet initialised
	I0603 13:50:03.604219 1143252 kubeadm.go:734] duration metric: took 5.606021ms waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604236 1143252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:03.611441 1143252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.615911 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615936 1143252 pod_ready.go:81] duration metric: took 4.468017ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.615945 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615955 1143252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.620663 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620683 1143252 pod_ready.go:81] duration metric: took 4.71967ms for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.620691 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620697 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.624894 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624917 1143252 pod_ready.go:81] duration metric: took 4.212227ms for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.624925 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624933 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.708636 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708665 1143252 pod_ready.go:81] duration metric: took 83.72445ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.708675 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708681 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.109391 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109454 1143252 pod_ready.go:81] duration metric: took 400.761651ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.109469 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109478 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.509683 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509712 1143252 pod_ready.go:81] duration metric: took 400.226435ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.509723 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509730 1143252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.909629 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909659 1143252 pod_ready.go:81] duration metric: took 399.917901ms for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.909669 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909679 1143252 pod_ready.go:38] duration metric: took 1.30543039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
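
The pod_ready.go lines above poll each system-critical pod and treat a node that is not yet "Ready" as a retryable condition rather than a failure. A comparable wait can be written directly against the API with client-go; the sketch below is illustrative only (the kubeconfig path and pod name are copied from the log, the polling intervals are arbitrary) and is not minikube's pod_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19011-1078924/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 4m, matching the "waiting up to 4m0s" budget above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-qdjrv", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			return podReady(pod), nil
		})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}
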
	I0603 13:50:04.909697 1143252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:04.921682 1143252 ops.go:34] apiserver oom_adj: -16
	I0603 13:50:04.921708 1143252 kubeadm.go:591] duration metric: took 11.171050234s to restartPrimaryControlPlane
	I0603 13:50:04.921717 1143252 kubeadm.go:393] duration metric: took 11.221962831s to StartCluster
	I0603 13:50:04.921737 1143252 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.921807 1143252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:04.923342 1143252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.923628 1143252 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:04.927063 1143252 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:04.923693 1143252 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:04.923865 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:04.928850 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:04.928873 1143252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223260"
	I0603 13:50:04.928872 1143252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223260"
	I0603 13:50:04.928889 1143252 addons.go:69] Setting metrics-server=true in profile "embed-certs-223260"
	I0603 13:50:04.928906 1143252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223260"
	I0603 13:50:04.928923 1143252 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223260"
	I0603 13:50:04.928935 1143252 addons.go:234] Setting addon metrics-server=true in "embed-certs-223260"
	W0603 13:50:04.928938 1143252 addons.go:243] addon storage-provisioner should already be in state true
	W0603 13:50:04.928945 1143252 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.929307 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929346 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929352 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929372 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929597 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929630 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.944948 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0603 13:50:04.945071 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0603 13:50:04.945489 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.945571 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.946137 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946166 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946299 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946319 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946589 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946650 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946798 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.947022 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0603 13:50:04.947210 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.947250 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.947517 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.948043 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.948069 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.948437 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.949064 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.949107 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.950532 1143252 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223260"
	W0603 13:50:04.950558 1143252 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:04.950589 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.950951 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.951008 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.964051 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37589
	I0603 13:50:04.964078 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0603 13:50:04.964513 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.964562 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.965062 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965088 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965128 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965153 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965473 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965532 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965652 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.965740 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.967606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.967739 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.969783 1143252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:04.971193 1143252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:02.567560 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.567988 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.568020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:50:02.567915 1144471 retry.go:31] will retry after 3.955051362s: waiting for machine to come up
	I0603 13:50:04.972568 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:04.972588 1143252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:04.972606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971275 1143252 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:04.972634 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:04.972658 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971495 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0603 13:50:04.973108 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.973575 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.973599 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.973931 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.974623 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.974658 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.976128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976251 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976535 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976559 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976709 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976724 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976768 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976915 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977099 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977156 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977242 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977305 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.977500 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.990810 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0603 13:50:04.991293 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.991844 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.991875 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.992279 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.992499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.994225 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.994456 1143252 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:04.994476 1143252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:04.994490 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.997771 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998210 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.998239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998418 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.998627 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.998811 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.998941 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:05.119962 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:05.140880 1143252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:05.271863 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:05.275815 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:05.275843 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:05.294572 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:05.346520 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:05.346553 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:05.417100 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:05.417141 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:05.496250 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:06.207746 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207781 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.207849 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207873 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208103 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208152 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208161 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208182 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208157 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208197 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208200 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208216 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208208 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208284 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208572 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208590 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208691 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208703 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208724 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.216764 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.216783 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.217095 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.217111 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374254 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374281 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374603 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374623 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374634 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374638 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.374644 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374901 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374916 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374933 1143252 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223260"
	I0603 13:50:06.374948 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.377491 1143252 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:50:08.083130 1143678 start.go:364] duration metric: took 3m45.627229097s to acquireMachinesLock for "old-k8s-version-151788"
	I0603 13:50:08.083256 1143678 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:08.083266 1143678 fix.go:54] fixHost starting: 
	I0603 13:50:08.083762 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:08.083812 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:08.103187 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0603 13:50:08.103693 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:08.104269 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:50:08.104299 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:08.104746 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:08.105115 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:08.105347 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetState
	I0603 13:50:08.107125 1143678 fix.go:112] recreateIfNeeded on old-k8s-version-151788: state=Stopped err=<nil>
	I0603 13:50:08.107173 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	W0603 13:50:08.107340 1143678 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:08.109207 1143678 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-151788" ...
	I0603 13:50:06.378684 1143252 addons.go:510] duration metric: took 1.4549999s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 13:50:07.145643 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:06.526793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527302 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Found IP for machine: 192.168.39.177
	I0603 13:50:06.527341 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has current primary IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527366 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserving static IP address...
	I0603 13:50:06.527822 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserved static IP address: 192.168.39.177
	I0603 13:50:06.527857 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for SSH to be available...
	I0603 13:50:06.527902 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.527956 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | skip adding static IP to network mk-default-k8s-diff-port-030870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"}
	I0603 13:50:06.527973 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Getting to WaitForSSH function...
	I0603 13:50:06.530287 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.530696 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530802 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH client type: external
	I0603 13:50:06.530827 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa (-rw-------)
	I0603 13:50:06.530849 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:06.530866 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | About to run SSH command:
	I0603 13:50:06.530877 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | exit 0
	I0603 13:50:06.653910 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:06.654259 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetConfigRaw
	I0603 13:50:06.654981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:06.658094 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658561 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.658600 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658921 1143450 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/config.json ...
	I0603 13:50:06.659144 1143450 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:06.659168 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:06.659486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.662534 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.662915 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.662959 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.663059 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.663258 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663476 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663660 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.663866 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.664103 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.664115 1143450 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:06.766054 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:06.766083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766406 1143450 buildroot.go:166] provisioning hostname "default-k8s-diff-port-030870"
	I0603 13:50:06.766440 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.769445 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.769820 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.769871 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.770029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.770244 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770423 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770670 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.770893 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.771057 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.771070 1143450 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-030870 && echo "default-k8s-diff-port-030870" | sudo tee /etc/hostname
	I0603 13:50:06.889997 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-030870
	
	I0603 13:50:06.890029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.893778 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894260 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.894297 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894614 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.894826 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895211 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.895423 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.895608 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.895625 1143450 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-030870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-030870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-030870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:07.007930 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
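
[Editor's note] The hostname and /etc/hosts updates above are plain shell commands run on the guest over SSH with the machine's private key (the options logged earlier: key auth, StrictHostKeyChecking=no). Below is a minimal sketch of that pattern in Go using golang.org/x/crypto/ssh; the address, user and key path are placeholders and this is not minikube's actual ssh_runner implementation.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote runs one command on the guest and returns its combined output.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // Placeholder values; the log above uses the guest IP and the profile's id_rsa.
        out, err := runRemote("192.168.39.177:22", "docker", "id_rsa", "hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out)
    }
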
	I0603 13:50:07.007971 1143450 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:07.008009 1143450 buildroot.go:174] setting up certificates
	I0603 13:50:07.008020 1143450 provision.go:84] configureAuth start
	I0603 13:50:07.008034 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:07.008433 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:07.011208 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011607 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.011640 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011774 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.013986 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014431 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.014462 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014655 1143450 provision.go:143] copyHostCerts
	I0603 13:50:07.014726 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:07.014737 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:07.014787 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:07.014874 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:07.014882 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:07.014902 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:07.014952 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:07.014959 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:07.014974 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:07.015020 1143450 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-030870 san=[127.0.0.1 192.168.39.177 default-k8s-diff-port-030870 localhost minikube]
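
[Editor's note] provision.go above issues a server certificate whose SANs cover 127.0.0.1, the machine IP, the machine name, localhost and minikube. The sketch below shows how such a SAN-bearing certificate can be created with Go's crypto/x509; the CA is generated in memory purely for illustration, whereas the log signs with the existing ca.pem/ca-key.pem.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative in-memory CA; in practice this would be loaded from ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-030870"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirror the "san=[...]" list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.177")},
            DNSNames:    []string{"default-k8s-diff-port-030870", "localhost", "minikube"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
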
	I0603 13:50:07.402535 1143450 provision.go:177] copyRemoteCerts
	I0603 13:50:07.402595 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:07.402626 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.405891 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406240 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.406272 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406484 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.406718 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.406943 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.407132 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.489480 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:07.517212 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 13:50:07.543510 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:07.570284 1143450 provision.go:87] duration metric: took 562.244781ms to configureAuth
	I0603 13:50:07.570318 1143450 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:07.570537 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:07.570629 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.574171 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574706 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.574739 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574948 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.575262 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575549 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575781 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.575965 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.576217 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.576247 1143450 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:07.839415 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:07.839455 1143450 machine.go:97] duration metric: took 1.180296439s to provisionDockerMachine
	I0603 13:50:07.839468 1143450 start.go:293] postStartSetup for "default-k8s-diff-port-030870" (driver="kvm2")
	I0603 13:50:07.839482 1143450 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:07.839506 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:07.839843 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:07.839872 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.842547 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.842884 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.842918 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.843234 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.843471 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.843708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.843952 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.927654 1143450 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:07.932965 1143450 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:07.932997 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:07.933082 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:07.933202 1143450 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:07.933343 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:07.945059 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:07.975774 1143450 start.go:296] duration metric: took 136.280559ms for postStartSetup
	I0603 13:50:07.975822 1143450 fix.go:56] duration metric: took 20.481265153s for fixHost
	I0603 13:50:07.975848 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.979035 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979436 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.979486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979737 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.980012 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980228 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980452 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.980691 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.980935 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.980954 1143450 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:50:08.082946 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422608.057620379
	
	I0603 13:50:08.082978 1143450 fix.go:216] guest clock: 1717422608.057620379
	I0603 13:50:08.082988 1143450 fix.go:229] Guest: 2024-06-03 13:50:08.057620379 +0000 UTC Remote: 2024-06-03 13:50:07.975826846 +0000 UTC m=+237.845886752 (delta=81.793533ms)
	I0603 13:50:08.083018 1143450 fix.go:200] guest clock delta is within tolerance: 81.793533ms
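
[Editor's note] fix.go reads the guest clock with date +%s.%N, compares it against the host timestamp and only resynchronizes when the difference exceeds a tolerance (here the ~82ms delta is accepted). Below is a small sketch of that comparison; the 2s threshold is an illustrative assumption, not minikube's actual value.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts "1717422608.057620379" into a time.Time.
    // It assumes the fractional part is the 9-digit nanosecond field printed by date +%N.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1717422608.057620379")
        if err != nil {
            panic(err)
        }
        remote := time.Date(2024, 6, 3, 13, 50, 7, 975826846, time.UTC) // host reference time from the log
        delta := guest.Sub(remote)
        const tolerance = 2 * time.Second // illustrative threshold
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
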
	I0603 13:50:08.083025 1143450 start.go:83] releasing machines lock for "default-k8s-diff-port-030870", held for 20.588515063s
	I0603 13:50:08.083060 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.083369 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:08.086674 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087202 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.087285 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087508 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088324 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088575 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088673 1143450 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:08.088758 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.088823 1143450 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:08.088852 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.092020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092175 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092406 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092485 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092863 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092893 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092916 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.092924 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.093273 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093522 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093541 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093708 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.093710 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.176292 1143450 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:08.204977 1143450 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:08.367121 1143450 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:08.376347 1143450 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:08.376431 1143450 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:08.398639 1143450 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:08.398672 1143450 start.go:494] detecting cgroup driver to use...
	I0603 13:50:08.398750 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:08.422776 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:08.443035 1143450 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:08.443108 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:08.459853 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:08.482009 1143450 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:08.631237 1143450 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:08.806623 1143450 docker.go:233] disabling docker service ...
	I0603 13:50:08.806715 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:08.827122 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:08.842457 1143450 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:08.999795 1143450 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:09.148706 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:09.167314 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:09.188867 1143450 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:09.188959 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.202239 1143450 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:09.202319 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.216228 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.231140 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.246767 1143450 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:09.260418 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.274349 1143450 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.300588 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.314659 1143450 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:09.326844 1143450 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:09.326919 1143450 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:09.344375 1143450 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
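
[Editor's note] The sysctl probe above fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled afterwards. Below is a local, root-only sketch of the same check-then-load sequence using the Go standard library (not the SSH-based runner the log uses).

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"

        // The sysctl only exists once br_netfilter is loaded, so probe the file first.
        if _, err := os.Stat(sysctlPath); os.IsNotExist(err) {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
                return
            }
        }

        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Printf("enabling ip_forward failed: %v\n", err)
            return
        }
        fmt.Println("bridge netfilter available and ip_forward enabled")
    }
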
	I0603 13:50:09.357955 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:09.504105 1143450 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:09.685468 1143450 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:09.685562 1143450 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:09.690863 1143450 start.go:562] Will wait 60s for crictl version
	I0603 13:50:09.690943 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:50:09.696532 1143450 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:09.742785 1143450 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:09.742891 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.782137 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.816251 1143450 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:09.817854 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:09.821049 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821555 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:09.821595 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821855 1143450 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:09.826658 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:09.841351 1143450 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:09.841521 1143450 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:09.841586 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:09.883751 1143450 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:09.883825 1143450 ssh_runner.go:195] Run: which lz4
	I0603 13:50:09.888383 1143450 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:50:09.893662 1143450 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:09.893704 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
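
[Editor's note] The stat call above is an existence check: a non-zero exit means the preload tarball is absent, so it is copied over. Below is a local-filesystem analogue of that check-then-copy logic; the paths are placeholders, not the actual preload locations.

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // ensureFile copies src to dst only when dst does not already exist,
    // mirroring the stat-then-scp sequence in the log above.
    func ensureFile(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            fmt.Println("already present, skipping copy:", dst)
            return nil
        } else if !os.IsNotExist(err) {
            return err
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        if _, err := io.Copy(out, in); err != nil {
            return err
        }
        return out.Sync()
    }

    func main() {
        // Placeholder paths.
        if err := ensureFile("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
            fmt.Println("copy failed:", err)
        }
    }
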
	I0603 13:50:08.110706 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .Start
	I0603 13:50:08.110954 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring networks are active...
	I0603 13:50:08.111890 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network default is active
	I0603 13:50:08.112291 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network mk-old-k8s-version-151788 is active
	I0603 13:50:08.112708 1143678 main.go:141] libmachine: (old-k8s-version-151788) Getting domain xml...
	I0603 13:50:08.113547 1143678 main.go:141] libmachine: (old-k8s-version-151788) Creating domain...
	I0603 13:50:09.528855 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting to get IP...
	I0603 13:50:09.529978 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.530410 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.530453 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.530382 1144654 retry.go:31] will retry after 208.935457ms: waiting for machine to come up
	I0603 13:50:09.741245 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.741816 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.741864 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.741769 1144654 retry.go:31] will retry after 376.532154ms: waiting for machine to come up
	I0603 13:50:10.120533 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.121261 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.121337 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.121239 1144654 retry.go:31] will retry after 339.126643ms: waiting for machine to come up
	I0603 13:50:10.461708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.462488 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.462514 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.462425 1144654 retry.go:31] will retry after 490.057426ms: waiting for machine to come up
	I0603 13:50:10.954107 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.954887 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.954921 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.954840 1144654 retry.go:31] will retry after 711.209001ms: waiting for machine to come up
	I0603 13:50:11.667459 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:11.668198 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:11.668231 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:11.668135 1144654 retry.go:31] will retry after 928.879285ms: waiting for machine to come up
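
[Editor's note] While the old-k8s-version VM boots, retry.go polls for its IP with a delay that grows on every attempt (209ms, 377ms, ... 928ms above). Below is a generic sketch of that retry-with-growing-backoff shape; the growth factor and jitter are illustrative assumptions, not minikube's retry implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a little longer (plus jitter) after each failure.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        wait := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            sleep := wait + time.Duration(rand.Int63n(int64(wait/2)+1))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            wait += wait / 2 // grow roughly 1.5x per attempt
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithBackoff(10, 200*time.Millisecond, func() error {
            tries++
            if tries < 4 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
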
	I0603 13:50:09.645006 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:10.146403 1143252 node_ready.go:49] node "embed-certs-223260" has status "Ready":"True"
	I0603 13:50:10.146438 1143252 node_ready.go:38] duration metric: took 5.005510729s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:10.146453 1143252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:10.154249 1143252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164361 1143252 pod_ready.go:92] pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:10.164401 1143252 pod_ready.go:81] duration metric: took 10.115855ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164419 1143252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675214 1143252 pod_ready.go:92] pod "etcd-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:11.675243 1143252 pod_ready.go:81] duration metric: took 1.510815036s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675254 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
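
[Editor's note] pod_ready.go above waits for the Ready condition on each system-critical pod. Below is a hedged sketch of the same check using client-go; the kubeconfig path and pod name are placeholders, and this is not minikube's helper.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-223260", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
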
	I0603 13:50:11.522734 1143450 crio.go:462] duration metric: took 1.634406537s to copy over tarball
	I0603 13:50:11.522837 1143450 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:13.983446 1143450 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460564522s)
	I0603 13:50:13.983484 1143450 crio.go:469] duration metric: took 2.460706596s to extract the tarball
	I0603 13:50:13.983503 1143450 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:50:14.029942 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:14.083084 1143450 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:50:14.083113 1143450 cache_images.go:84] Images are preloaded, skipping loading
	I0603 13:50:14.083122 1143450 kubeadm.go:928] updating node { 192.168.39.177 8444 v1.30.1 crio true true} ...
	I0603 13:50:14.083247 1143450 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-030870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:14.083319 1143450 ssh_runner.go:195] Run: crio config
	I0603 13:50:14.142320 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:14.142344 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:14.142354 1143450 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:14.142379 1143450 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-030870 NodeName:default-k8s-diff-port-030870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:50:14.142517 1143450 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-030870"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:14.142577 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:50:14.153585 1143450 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:14.153687 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:14.164499 1143450 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0603 13:50:14.186564 1143450 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:14.205489 1143450 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
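
[Editor's note] The kubeadm.yaml just written is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Below is a small sketch that walks such a stream with gopkg.in/yaml.v3 and reports each document's kind, for example as a quick sanity check; the file name is a placeholder.

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // placeholder for the generated config
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document in the log above carries apiVersion and kind.
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }
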
	I0603 13:50:14.227005 1143450 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:14.231782 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:14.247433 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:14.368336 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:14.391791 1143450 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870 for IP: 192.168.39.177
	I0603 13:50:14.391816 1143450 certs.go:194] generating shared ca certs ...
	I0603 13:50:14.391840 1143450 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:14.392015 1143450 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:14.392075 1143450 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:14.392090 1143450 certs.go:256] generating profile certs ...
	I0603 13:50:14.392282 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/client.key
	I0603 13:50:14.392373 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key.7a30187e
	I0603 13:50:14.392428 1143450 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key
	I0603 13:50:14.392545 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:14.392602 1143450 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:14.392616 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:14.392650 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:14.392687 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:14.392722 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:14.392780 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:14.393706 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:14.424354 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:14.476267 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:14.514457 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:14.548166 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 13:50:14.584479 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:14.626894 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:14.663103 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:50:14.696750 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:14.725770 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:14.755779 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:14.786060 1143450 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:14.805976 1143450 ssh_runner.go:195] Run: openssl version
	I0603 13:50:14.812737 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:14.824707 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831139 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831255 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.838855 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:14.850974 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:14.865613 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871431 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871518 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.878919 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:14.891371 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:14.903721 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909069 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909180 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.915904 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:50:14.928622 1143450 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:14.934466 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:14.941321 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:14.947960 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:14.955629 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:14.962761 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:14.970396 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
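
The run of openssl x509 ... -checkend 86400 commands above verifies that each control-plane certificate remains valid for at least another 24 hours. The same check can be expressed with crypto/x509; this is a minimal sketch assuming a PEM-encoded certificate file on the local machine, not the code minikube runs on the VM:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// "apiserver.crt" is a placeholder path; the log checks certificates
	// under /var/lib/minikube/certs on the VM.
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
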
	I0603 13:50:14.977381 1143450 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:14.977543 1143450 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:14.977599 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.042628 1143450 cri.go:89] found id: ""
	I0603 13:50:15.042733 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:15.055439 1143450 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:15.055469 1143450 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:15.055476 1143450 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:15.055535 1143450 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:15.067250 1143450 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:15.068159 1143450 kubeconfig.go:125] found "default-k8s-diff-port-030870" server: "https://192.168.39.177:8444"
	I0603 13:50:15.070060 1143450 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:15.082723 1143450 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.177
	I0603 13:50:15.082788 1143450 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:15.082809 1143450 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:15.082972 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.124369 1143450 cri.go:89] found id: ""
	I0603 13:50:15.124509 1143450 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:15.144064 1143450 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:15.156148 1143450 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:15.156174 1143450 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:15.156240 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 13:50:15.166927 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:15.167006 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:12.598536 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:12.598972 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:12.599008 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:12.598948 1144654 retry.go:31] will retry after 882.970422ms: waiting for machine to come up
	I0603 13:50:13.483171 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:13.483723 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:13.483758 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:13.483640 1144654 retry.go:31] will retry after 1.215665556s: waiting for machine to come up
	I0603 13:50:14.701392 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:14.701960 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:14.701991 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:14.701899 1144654 retry.go:31] will retry after 1.614371992s: waiting for machine to come up
	I0603 13:50:16.318708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:16.319127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:16.319148 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:16.319103 1144654 retry.go:31] will retry after 2.146267337s: waiting for machine to come up
	I0603 13:50:13.683419 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:15.684744 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:16.792510 1143252 pod_ready.go:92] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.792538 1143252 pod_ready.go:81] duration metric: took 5.117277447s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.792549 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798083 1143252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.798112 1143252 pod_ready.go:81] duration metric: took 5.554915ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798126 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804217 1143252 pod_ready.go:92] pod "kube-proxy-s5vdl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.804247 1143252 pod_ready.go:81] duration metric: took 6.113411ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804262 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810317 1143252 pod_ready.go:92] pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.810343 1143252 pod_ready.go:81] duration metric: took 6.073098ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810357 1143252 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:15.178645 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 13:50:15.486524 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:15.486608 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:15.497694 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.509586 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:15.509665 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.521976 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 13:50:15.533446 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:15.533535 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:50:15.545525 1143450 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
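
The grep/rm sequence above applies a simple rule: each kubeconfig under /etc/kubernetes survives only if it already references https://control-plane.minikube.internal:8444; anything else is deleted so the kubeadm init phases below can regenerate it. A local, hedged Go sketch of that check follows (minikube actually runs the grep and rm over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// reportStaleKubeconfigs flags any of the given files that do not reference
// endpoint; the log removes such files with `sudo rm -f` so kubeadm can
// regenerate them.
func reportStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// os.Remove(p) would mirror the log's rm -f; this sketch only reports.
			fmt.Println("stale or missing:", p)
		}
	}
}

func main() {
	reportStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
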
	I0603 13:50:15.557558 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:15.710109 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.725380 1143450 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015227554s)
	I0603 13:50:16.725452 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.964275 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.061586 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.183665 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:17.183764 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:17.684365 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.184269 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.254733 1143450 api_server.go:72] duration metric: took 1.07106398s to wait for apiserver process to appear ...
	I0603 13:50:18.254769 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:50:18.254797 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:18.466825 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:18.467260 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:18.467292 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:18.467187 1144654 retry.go:31] will retry after 2.752334209s: waiting for machine to come up
	I0603 13:50:21.220813 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:21.221235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:21.221267 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:21.221182 1144654 retry.go:31] will retry after 3.082080728s: waiting for machine to come up
	I0603 13:50:18.819188 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.323790 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.193140 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.193177 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.193193 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.265534 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.265580 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.265602 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.277669 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.277703 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.754973 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.761802 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:21.761841 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.255071 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.262166 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.262227 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.755128 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.759896 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.759936 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.255520 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.262093 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.262128 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.755784 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.760053 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.760079 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.255534 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.259793 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:24.259820 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.755365 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.759964 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:50:24.768830 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:24.768862 1143450 api_server.go:131] duration metric: took 6.51408552s to wait for apiserver health ...
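
The /healthz probes above progress from 403 (anonymous requests rejected while RBAC bootstrap roles are still missing) through 500 (individual poststarthook checks not yet finished) to a final 200. A minimal, hypothetical Go poller for such an endpoint could look like this; certificate verification is skipped only to keep the sketch self-contained, whereas a real client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.177:8444/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
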
	I0603 13:50:24.768872 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:24.768879 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:24.771099 1143450 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:24.772806 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:24.784204 1143450 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:50:24.805572 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:24.816944 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:24.816988 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:24.816997 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:24.817008 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:24.817021 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:24.817028 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:50:24.817037 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:24.817044 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:24.817050 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:50:24.817060 1143450 system_pods.go:74] duration metric: took 11.461696ms to wait for pod list to return data ...
	I0603 13:50:24.817069 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:24.820804 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:24.820834 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:24.820846 1143450 node_conditions.go:105] duration metric: took 3.771492ms to run NodePressure ...
	I0603 13:50:24.820865 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:25.098472 1143450 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103237 1143450 kubeadm.go:733] kubelet initialised
	I0603 13:50:25.103263 1143450 kubeadm.go:734] duration metric: took 4.763539ms waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103274 1143450 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:25.109364 1143450 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.114629 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114662 1143450 pod_ready.go:81] duration metric: took 5.268473ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.114676 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114687 1143450 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.118734 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118777 1143450 pod_ready.go:81] duration metric: took 4.079659ms for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.118790 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118810 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.123298 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123334 1143450 pod_ready.go:81] duration metric: took 4.509948ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.123351 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123361 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.210283 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210316 1143450 pod_ready.go:81] duration metric: took 86.945898ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.210329 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210338 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.609043 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609074 1143450 pod_ready.go:81] duration metric: took 398.728553ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.609084 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609091 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.009831 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009866 1143450 pod_ready.go:81] duration metric: took 400.766037ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.009880 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009888 1143450 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.410271 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410301 1143450 pod_ready.go:81] duration metric: took 400.402293ms for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.410315 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410326 1143450 pod_ready.go:38] duration metric: took 1.307039933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
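
The pod_ready loop above waits for each system-critical pod to report the Ready condition and skips ahead when the hosting node itself is not Ready yet. As a rough sketch of the underlying readiness check with client-go (the kubeconfig path and pod name are taken from this log purely as examples, and this is not minikube's own wait code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named kube-system pod has the Ready condition.
func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19011-1078924/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		if ok, err := podReady(ctx, cs, "coredns-7db6d8ff4d-flxqj"); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		if ctx.Err() != nil {
			panic("timed out waiting for pod to become Ready")
		}
		time.Sleep(2 * time.Second)
	}
}
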
	I0603 13:50:26.410347 1143450 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:26.422726 1143450 ops.go:34] apiserver oom_adj: -16
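
The -16 value above indicates the kubelet has applied OOM-score protection to the apiserver process. A small, illustrative Go equivalent of cat /proc/$(pgrep kube-apiserver)/oom_adj, shelling out to pgrep for brevity:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj returns the oom_adj value of the newest kube-apiserver
// process, the same value the log reads with pgrep + cat.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	pid := strings.TrimSpace(string(out))
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	v, err := apiserverOOMAdj()
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver oom_adj:", v)
}
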
	I0603 13:50:26.422753 1143450 kubeadm.go:591] duration metric: took 11.367271168s to restartPrimaryControlPlane
	I0603 13:50:26.422763 1143450 kubeadm.go:393] duration metric: took 11.445396197s to StartCluster
	I0603 13:50:26.422784 1143450 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.422866 1143450 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:26.424423 1143450 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.424744 1143450 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:26.426628 1143450 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:26.424855 1143450 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:26.424985 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:26.428227 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:26.428239 1143450 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428241 1143450 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428275 1143450 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-030870"
	I0603 13:50:26.428285 1143450 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428297 1143450 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:50:26.428243 1143450 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428338 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428404 1143450 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428428 1143450 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:26.428501 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428650 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428676 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428724 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428751 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428948 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.429001 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.445709 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0603 13:50:26.446187 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.446719 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.446743 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.447152 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.447817 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.447852 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.449660 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0603 13:50:26.449721 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0603 13:50:26.450120 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450161 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450735 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450755 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.450906 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450930 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.451177 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451333 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451421 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.451909 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.451951 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.455458 1143450 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.455484 1143450 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:26.455523 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.455776 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.455825 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.470807 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0603 13:50:26.471179 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0603 13:50:26.471763 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.471921 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472042 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0603 13:50:26.472471 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472501 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472575 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472750 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472760 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472966 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473095 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.473118 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.473132 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473134 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473357 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473486 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.474129 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.474183 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.475437 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.475594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.477911 1143450 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:26.479474 1143450 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:24.304462 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:24.305104 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:24.305175 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:24.305099 1144654 retry.go:31] will retry after 4.178596743s: waiting for machine to come up
	I0603 13:50:26.480998 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:26.481021 1143450 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:26.481047 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.479556 1143450 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.481095 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:26.481116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.484634 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.484694 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485147 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485160 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485538 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485628 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485729 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485829 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485856 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.485993 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.486040 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.486158 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.496035 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0603 13:50:26.496671 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.497270 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.497290 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.497719 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.497989 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.500018 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.500280 1143450 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.500298 1143450 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:26.500318 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.503226 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503732 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.503768 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503967 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.504212 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.504399 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.504556 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.608774 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:26.629145 1143450 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:26.692164 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.784756 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.788686 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:26.788711 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:26.841094 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:26.841129 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:26.907657 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:26.907688 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:26.963244 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963280 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963618 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963641 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963649 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963653 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.963657 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963962 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963980 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963982 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.971726 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.971748 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.972101 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.972125 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.975238 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:27.653643 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.653689 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654037 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654061 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.654078 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.654087 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654429 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.654484 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654507 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847367 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847397 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.847745 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.847770 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847779 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847785 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.847793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.848112 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.848130 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.848144 1143450 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-030870"
	I0603 13:50:27.851386 1143450 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
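Once the addons are reported as enabled, the metrics-server rollout can be checked directly; a short sketch (the deployment name is inferred from the metrics-server-569cc877fc-8xw9v pod seen earlier in the log, the commands are illustrative):

    kubectl --context default-k8s-diff-port-030870 -n kube-system rollout status deployment/metrics-server --timeout=120s
    # Only returns data once the metrics API is actually serving
    kubectl --context default-k8s-diff-port-030870 top nodes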
	I0603 13:50:23.817272 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:25.818013 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:27.818160 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:29.798777 1142862 start.go:364] duration metric: took 56.694826675s to acquireMachinesLock for "no-preload-817450"
	I0603 13:50:29.798855 1142862 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:29.798866 1142862 fix.go:54] fixHost starting: 
	I0603 13:50:29.799329 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:29.799369 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:29.817787 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0603 13:50:29.818396 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:29.819003 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:50:29.819025 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:29.819450 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:29.819617 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:29.819782 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:50:29.821742 1142862 fix.go:112] recreateIfNeeded on no-preload-817450: state=Stopped err=<nil>
	I0603 13:50:29.821777 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	W0603 13:50:29.821973 1142862 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:29.823915 1142862 out.go:177] * Restarting existing kvm2 VM for "no-preload-817450" ...
	I0603 13:50:27.852929 1143450 addons.go:510] duration metric: took 1.428071927s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0603 13:50:28.633355 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:29.825584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Start
	I0603 13:50:29.825783 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring networks are active...
	I0603 13:50:29.826746 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network default is active
	I0603 13:50:29.827116 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network mk-no-preload-817450 is active
	I0603 13:50:29.827617 1142862 main.go:141] libmachine: (no-preload-817450) Getting domain xml...
	I0603 13:50:29.828419 1142862 main.go:141] libmachine: (no-preload-817450) Creating domain...
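The kvm2 driver performs this restart through libvirt; roughly the same sequence can be reproduced with virsh (a sketch only, using the domain and network names shown in the log, run as a user with libvirt access):

    virsh net-list --all                   # 'default' and 'mk-no-preload-817450' should be active
    virsh net-start mk-no-preload-817450   # only needed if the network is inactive
    virsh dumpxml no-preload-817450        # the "Getting domain xml..." step
    virsh start no-preload-817450          # boot the existing domain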
	I0603 13:50:28.485041 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.485598 1143678 main.go:141] libmachine: (old-k8s-version-151788) Found IP for machine: 192.168.50.65
	I0603 13:50:28.485624 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserving static IP address...
	I0603 13:50:28.485639 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has current primary IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.486053 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserved static IP address: 192.168.50.65
	I0603 13:50:28.486109 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.486123 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting for SSH to be available...
	I0603 13:50:28.486144 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | skip adding static IP to network mk-old-k8s-version-151788 - found existing host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"}
	I0603 13:50:28.486156 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Getting to WaitForSSH function...
	I0603 13:50:28.488305 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.488754 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.488788 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.489025 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH client type: external
	I0603 13:50:28.489048 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa (-rw-------)
	I0603 13:50:28.489114 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:28.489147 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | About to run SSH command:
	I0603 13:50:28.489167 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | exit 0
	I0603 13:50:28.613732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:28.614183 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:50:28.614879 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.617742 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.618270 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618481 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:50:28.618699 1143678 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:28.618719 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:28.618967 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.621356 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621655 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.621685 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.622117 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622321 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622511 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.622750 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.622946 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.622958 1143678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:28.726383 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:28.726419 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.726740 1143678 buildroot.go:166] provisioning hostname "old-k8s-version-151788"
	I0603 13:50:28.726777 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.727042 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.729901 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730372 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.730402 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730599 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.730824 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731031 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731205 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.731403 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.731585 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.731599 1143678 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-151788 && echo "old-k8s-version-151788" | sudo tee /etc/hostname
	I0603 13:50:28.848834 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-151788
	
	I0603 13:50:28.848867 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.852250 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852698 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.852721 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852980 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.853239 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853536 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853819 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.854093 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.854338 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.854367 1143678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-151788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-151788/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-151788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:28.967427 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:28.967461 1143678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:28.967520 1143678 buildroot.go:174] setting up certificates
	I0603 13:50:28.967538 1143678 provision.go:84] configureAuth start
	I0603 13:50:28.967550 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.967946 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.970841 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971226 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.971256 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971449 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.974316 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974702 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.974732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974911 1143678 provision.go:143] copyHostCerts
	I0603 13:50:28.974994 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:28.975010 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:28.975068 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:28.975247 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:28.975260 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:28.975283 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:28.975354 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:28.975362 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:28.975385 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:28.975463 1143678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-151788 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-151788]
	I0603 13:50:29.096777 1143678 provision.go:177] copyRemoteCerts
	I0603 13:50:29.096835 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:29.096865 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.099989 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100408 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.100434 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100644 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.100831 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.100975 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.101144 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.184886 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:29.211432 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 13:50:29.238552 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
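The three scp calls above stage the machine's TLS material under /etc/docker on the guest; it can be spot-checked over the same SSH session (a sketch, file paths as reported in the log):

    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl x509 -noout -subject -enddate -in /etc/docker/server.pem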
	I0603 13:50:29.266803 1143678 provision.go:87] duration metric: took 299.247567ms to configureAuth
	I0603 13:50:29.266844 1143678 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:29.267107 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:50:29.267220 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.270966 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271417 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.271472 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271688 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.271893 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272121 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272327 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.272544 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.272787 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.272811 1143678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:29.548407 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:29.548437 1143678 machine.go:97] duration metric: took 929.724002ms to provisionDockerMachine
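The %!s(MISSING) in the logged command is a formatting artifact of the log line itself; judging from the echoed output above, the provisioning step amounts to roughly the following (a reconstruction, not the verbatim command):

    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio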
	I0603 13:50:29.548449 1143678 start.go:293] postStartSetup for "old-k8s-version-151788" (driver="kvm2")
	I0603 13:50:29.548461 1143678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:29.548486 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.548924 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:29.548992 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.552127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552531 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.552571 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552756 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.552974 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.553166 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.553364 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.637026 1143678 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:29.641264 1143678 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:29.641293 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:29.641376 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:29.641509 1143678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:29.641600 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:29.657273 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:29.688757 1143678 start.go:296] duration metric: took 140.291954ms for postStartSetup
	I0603 13:50:29.688806 1143678 fix.go:56] duration metric: took 21.605539652s for fixHost
	I0603 13:50:29.688843 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.691764 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692170 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.692216 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692356 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.692623 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692814 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692996 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.693180 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.693372 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.693384 1143678 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:29.798629 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422629.770375968
	
	I0603 13:50:29.798655 1143678 fix.go:216] guest clock: 1717422629.770375968
	I0603 13:50:29.798662 1143678 fix.go:229] Guest: 2024-06-03 13:50:29.770375968 +0000 UTC Remote: 2024-06-03 13:50:29.688811675 +0000 UTC m=+247.377673500 (delta=81.564293ms)
	I0603 13:50:29.798683 1143678 fix.go:200] guest clock delta is within tolerance: 81.564293ms
	I0603 13:50:29.798688 1143678 start.go:83] releasing machines lock for "old-k8s-version-151788", held for 21.715483341s
	I0603 13:50:29.798712 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.799019 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:29.802078 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802479 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.802522 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802674 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803271 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803496 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803584 1143678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:29.803646 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.803961 1143678 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:29.803988 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.806505 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806863 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806926 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.806961 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807093 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807299 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807345 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.807386 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807476 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.807670 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807669 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.807841 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807947 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.808183 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.890622 1143678 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:29.918437 1143678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:30.064471 1143678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:30.073881 1143678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:30.073969 1143678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:30.097037 1143678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:30.097070 1143678 start.go:494] detecting cgroup driver to use...
	I0603 13:50:30.097147 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:30.114374 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:30.132000 1143678 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:30.132075 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:30.148156 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:30.164601 1143678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:30.303125 1143678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:30.475478 1143678 docker.go:233] disabling docker service ...
	I0603 13:50:30.475578 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:30.494632 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:30.513383 1143678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:30.691539 1143678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:30.849280 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:30.869107 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:30.893451 1143678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 13:50:30.893528 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.909358 1143678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:30.909465 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.926891 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.941879 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
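Taken together, the sed edits above leave the CRI-O drop-in pinning the pause image and the cgroup settings; a quick way to confirm on the guest (expected values copied from the log lines above, the rest a sketch):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"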
	I0603 13:50:30.957985 1143678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:30.971349 1143678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:30.984948 1143678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:30.985023 1143678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:30.999255 1143678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:31.011615 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:31.162848 1143678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:31.352121 1143678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:31.352190 1143678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:31.357946 1143678 start.go:562] Will wait 60s for crictl version
	I0603 13:50:31.358032 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:31.362540 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:31.410642 1143678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:31.410757 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.444750 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.482404 1143678 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 13:50:31.484218 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:31.488049 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488663 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:31.488695 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488985 1143678 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:31.494813 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
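The /etc/hosts update above follows a filter-then-append pattern: drop any stale host.minikube.internal line, append the fresh mapping, and copy the result back with sudo (cp into the existing file rather than mv, so the target's inode, ownership, and any SELinux label stay intact). A generalized sketch of the same pattern, with the gateway IP taken from the log:

    GATEWAY_IP=192.168.50.1
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '%s\thost.minikube.internal\n' "$GATEWAY_IP"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$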
	I0603 13:50:31.511436 1143678 kubeadm.go:877] updating cluster {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:31.511597 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:50:31.511659 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:31.571733 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:31.571819 1143678 ssh_runner.go:195] Run: which lz4
	I0603 13:50:31.577765 1143678 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:50:31.583983 1143678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:31.584025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 13:50:30.319230 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:32.824874 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:30.633456 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:32.134192 1143450 node_ready.go:49] node "default-k8s-diff-port-030870" has status "Ready":"True"
	I0603 13:50:32.134227 1143450 node_ready.go:38] duration metric: took 5.505047986s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:32.134241 1143450 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:32.143157 1143450 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150075 1143450 pod_ready.go:92] pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:32.150113 1143450 pod_ready.go:81] duration metric: took 6.922006ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150128 1143450 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:34.157758 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:31.283193 1142862 main.go:141] libmachine: (no-preload-817450) Waiting to get IP...
	I0603 13:50:31.284191 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.284681 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.284757 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.284641 1144889 retry.go:31] will retry after 246.139268ms: waiting for machine to come up
	I0603 13:50:31.532345 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.533024 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.533056 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.532956 1144889 retry.go:31] will retry after 283.586657ms: waiting for machine to come up
	I0603 13:50:31.818610 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.819271 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.819302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.819235 1144889 retry.go:31] will retry after 345.327314ms: waiting for machine to come up
	I0603 13:50:32.165948 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.166532 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.166585 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.166485 1144889 retry.go:31] will retry after 567.370644ms: waiting for machine to come up
	I0603 13:50:32.735409 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.736074 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.736118 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.735978 1144889 retry.go:31] will retry after 523.349811ms: waiting for machine to come up
	I0603 13:50:33.261023 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.261738 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.261769 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.261685 1144889 retry.go:31] will retry after 617.256992ms: waiting for machine to come up
	I0603 13:50:33.880579 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.881159 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.881188 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.881113 1144889 retry.go:31] will retry after 975.807438ms: waiting for machine to come up
	I0603 13:50:34.858935 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:34.859418 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:34.859447 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:34.859365 1144889 retry.go:31] will retry after 1.257722281s: waiting for machine to come up
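The retry lines above are libmachine waiting for the domain's MAC address to show up in the libvirt network's DHCP leases, backing off a little longer on each attempt. An equivalent manual check, assuming virsh access to the same host (network name and MAC taken from the log):

    # poll the libvirt DHCP leases until the VM's MAC has been handed an address
    until sudo virsh net-dhcp-leases mk-no-preload-817450 | grep -q '52:54:00:8f:cc:be'; do
      sleep 2   # the tool itself backs off between roughly 250ms and a few seconds
    done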
	I0603 13:50:33.399678 1143678 crio.go:462] duration metric: took 1.821959808s to copy over tarball
	I0603 13:50:33.399768 1143678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:36.631033 1143678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.231219364s)
	I0603 13:50:36.631081 1143678 crio.go:469] duration metric: took 3.231364789s to extract the tarball
	I0603 13:50:36.631092 1143678 ssh_runner.go:146] rm: /preloaded.tar.lz4
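The preload path shown above is: check for /preloaded.tar.lz4 on the node, scp the ~473 MB tarball over, then unpack it straight into /var through tar's lz4 filter; the --xattrs --xattrs-include security.capability flags keep extended attributes so file capabilities on the bundled binaries survive extraction. To peek at such a tarball locally before it is shipped (a sketch, assuming the lz4 CLI is installed):

    lz4 -dc preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 | tar -tf - | head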
	I0603 13:50:36.677954 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:36.718160 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:36.718197 1143678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.718456 1143678 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.718302 1143678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.718343 1143678 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.718858 1143678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.720644 1143678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.720573 1143678 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720576 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.720603 1143678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.720608 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.721118 1143678 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.907182 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.907179 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.910017 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.920969 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.925739 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.935710 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.946767 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 13:50:36.973425 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.050763 1143678 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 13:50:37.050817 1143678 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.050846 1143678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 13:50:37.050876 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.050880 1143678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.050906 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162505 1143678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 13:50:37.162561 1143678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.162608 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162706 1143678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 13:50:37.162727 1143678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.162754 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162858 1143678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 13:50:37.162898 1143678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.162922 1143678 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 13:50:37.162965 1143678 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 13:50:37.163001 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162943 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.164963 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.165019 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.165136 1143678 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 13:50:37.165260 1143678 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.165295 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.188179 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.188292 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 13:50:37.188315 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.188371 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.188561 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.300592 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 13:50:37.300642 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 13:50:35.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.160066 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.334685 1143450 pod_ready.go:92] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.334719 1143450 pod_ready.go:81] duration metric: took 5.184582613s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.334732 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341104 1143450 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.341140 1143450 pod_ready.go:81] duration metric: took 6.399805ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341154 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347174 1143450 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.347208 1143450 pod_ready.go:81] duration metric: took 6.044519ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347220 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356909 1143450 pod_ready.go:92] pod "kube-proxy-thsrx" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.356949 1143450 pod_ready.go:81] duration metric: took 9.72108ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356962 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363891 1143450 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.363915 1143450 pod_ready.go:81] duration metric: took 6.9442ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363927 1143450 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:39.372092 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.118754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:36.119214 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:36.119251 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:36.119148 1144889 retry.go:31] will retry after 1.380813987s: waiting for machine to come up
	I0603 13:50:37.501464 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:37.501889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:37.501937 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:37.501849 1144889 retry.go:31] will retry after 2.144177789s: waiting for machine to come up
	I0603 13:50:39.648238 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:39.648744 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:39.648768 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:39.648693 1144889 retry.go:31] will retry after 1.947487062s: waiting for machine to come up
	I0603 13:50:37.360149 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 13:50:37.360196 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 13:50:37.360346 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 13:50:37.360371 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 13:50:37.360436 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 13:50:37.543409 1143678 cache_images.go:92] duration metric: took 825.189409ms to LoadCachedImages
	W0603 13:50:37.543559 1143678 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 13:50:37.543581 1143678 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0603 13:50:37.543723 1143678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-151788 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
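The unit drop-in above wires kubelet to CRI-O via --container-runtime-endpoint and pins the node name and IP. If the restart later stalls, the usual first checks on the node are to confirm systemd actually loaded the drop-in and to read kubelet's own log (a debugging sketch, not part of the captured run):

    systemctl cat kubelet                                 # shows the 10-kubeadm.conf drop-in
    sudo journalctl -u kubelet -b --no-pager | tail -n 50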
	I0603 13:50:37.543804 1143678 ssh_runner.go:195] Run: crio config
	I0603 13:50:37.601388 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:50:37.601428 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:37.601445 1143678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:37.601471 1143678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-151788 NodeName:old-k8s-version-151788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 13:50:37.601664 1143678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-151788"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:37.601746 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 13:50:37.613507 1143678 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:37.613588 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:37.623853 1143678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0603 13:50:37.642298 1143678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:37.660863 1143678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0603 13:50:37.679974 1143678 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:37.685376 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:37.702732 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:37.859343 1143678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:37.880684 1143678 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788 for IP: 192.168.50.65
	I0603 13:50:37.880714 1143678 certs.go:194] generating shared ca certs ...
	I0603 13:50:37.880737 1143678 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:37.880952 1143678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:37.881012 1143678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:37.881024 1143678 certs.go:256] generating profile certs ...
	I0603 13:50:37.881179 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key
	I0603 13:50:37.881279 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3
	I0603 13:50:37.881334 1143678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key
	I0603 13:50:37.881554 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:37.881602 1143678 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:37.881629 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:37.881667 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:37.881698 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:37.881730 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:37.881805 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:37.882741 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:37.919377 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:37.957218 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:37.987016 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:38.024442 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 13:50:38.051406 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:38.094816 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:38.143689 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:50:38.171488 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:38.197296 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:38.224025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:38.250728 1143678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:38.270485 1143678 ssh_runner.go:195] Run: openssl version
	I0603 13:50:38.276995 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:38.288742 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293880 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293955 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.300456 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:38.312180 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:38.324349 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329812 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329881 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.337560 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:38.350229 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:38.362635 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368842 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368920 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.376029 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
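The /etc/ssl/certs/<hash>.0 symlinks created above are how OpenSSL's hashed certificate directory lookup works: the link name is the subject hash of the certificate, so anything that trusts /etc/ssl/certs can locate the CA by hash. The hashes seen in the log (3ec20f2e, b5213941, 51391683) come from exactly this computation; a sketch reproducing the minikubeCA one:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$hash"    # per the log, this CA hashes to b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/"$hash".0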
	I0603 13:50:38.387703 1143678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:38.393071 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:38.399760 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:38.406332 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:38.413154 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:38.419162 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:38.425818 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
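The -checkend 86400 probes above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 if the cert outlives that window and 1 if it would expire, which is what lets the restart path keep the existing certificates instead of regenerating them. A standalone sketch of the same check:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h - regenerate"
    fi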
	I0603 13:50:38.432495 1143678 kubeadm.go:391] StartCluster: {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:38.432659 1143678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:38.432718 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.479889 1143678 cri.go:89] found id: ""
	I0603 13:50:38.479975 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:38.490549 1143678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:38.490574 1143678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:38.490580 1143678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:38.490637 1143678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:38.501024 1143678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:38.503665 1143678 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:38.504563 1143678 kubeconfig.go:62] /home/jenkins/minikube-integration/19011-1078924/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-151788" cluster setting kubeconfig missing "old-k8s-version-151788" context setting]
	I0603 13:50:38.505614 1143678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:38.562691 1143678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:38.573839 1143678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.65
	I0603 13:50:38.573889 1143678 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:38.573905 1143678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:38.573987 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.615876 1143678 cri.go:89] found id: ""
	I0603 13:50:38.615972 1143678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:38.633568 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:38.645197 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:38.645229 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:38.645291 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:50:38.655344 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:38.655423 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:38.665789 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:50:38.674765 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:38.674842 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:38.684268 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.693586 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:38.693650 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.703313 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:50:38.712523 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:38.712597 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:50:38.722362 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:38.732190 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:38.875545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.722534 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.970226 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.090817 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.193178 1143678 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:40.193485 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:40.693580 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.193579 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.693608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:42.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
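The repeated pgrep lines above (and continued further down) are the "waiting for apiserver process to appear" loop: roughly every 500 ms the tool looks for a kube-apiserver process whose full command line matches the logged pattern. The same wait, expressed directly in shell (pattern copied from the log):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
    echo "kube-apiserver is up"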
	I0603 13:50:39.318177 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.818337 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.373738 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:43.870381 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.597745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:41.598343 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:41.598372 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:41.598280 1144889 retry.go:31] will retry after 2.47307834s: waiting for machine to come up
	I0603 13:50:44.074548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:44.075009 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:44.075037 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:44.074970 1144889 retry.go:31] will retry after 3.055733752s: waiting for machine to come up
	I0603 13:50:42.693593 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.194448 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.693645 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.694583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.194065 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.694138 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.194173 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.694344 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:47.194063 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.316348 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:46.317245 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:47.133727 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134266 1142862 main.go:141] libmachine: (no-preload-817450) Found IP for machine: 192.168.72.125
	I0603 13:50:47.134301 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has current primary IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134308 1142862 main.go:141] libmachine: (no-preload-817450) Reserving static IP address...
	I0603 13:50:47.134745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.134777 1142862 main.go:141] libmachine: (no-preload-817450) Reserved static IP address: 192.168.72.125
	I0603 13:50:47.134797 1142862 main.go:141] libmachine: (no-preload-817450) DBG | skip adding static IP to network mk-no-preload-817450 - found existing host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"}
	I0603 13:50:47.134816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Getting to WaitForSSH function...
	I0603 13:50:47.134858 1142862 main.go:141] libmachine: (no-preload-817450) Waiting for SSH to be available...
	I0603 13:50:47.137239 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137669 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.137705 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137810 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH client type: external
	I0603 13:50:47.137835 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa (-rw-------)
	I0603 13:50:47.137870 1142862 main.go:141] libmachine: (no-preload-817450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:47.137879 1142862 main.go:141] libmachine: (no-preload-817450) DBG | About to run SSH command:
	I0603 13:50:47.137889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | exit 0
	I0603 13:50:47.265932 1142862 main.go:141] libmachine: (no-preload-817450) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:47.266268 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetConfigRaw
	I0603 13:50:47.267007 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.269463 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.269849 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.269885 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.270135 1142862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/config.json ...
	I0603 13:50:47.270355 1142862 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:47.270375 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:47.270589 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.272915 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273307 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.273341 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273543 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.273737 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.273905 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.274061 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.274242 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.274417 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.274429 1142862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:47.380760 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:47.380789 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381068 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:50:47.381095 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381314 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.384093 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384460 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.384482 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.384798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.384938 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.385099 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.385276 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.385533 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.385562 1142862 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-817450 && echo "no-preload-817450" | sudo tee /etc/hostname
	I0603 13:50:47.505203 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-817450
	
	I0603 13:50:47.505231 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.508267 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508696 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.508721 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508877 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.509066 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509281 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509437 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.509606 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.509780 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.509795 1142862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-817450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-817450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-817450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:47.618705 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:47.618757 1142862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:47.618822 1142862 buildroot.go:174] setting up certificates
	I0603 13:50:47.618835 1142862 provision.go:84] configureAuth start
	I0603 13:50:47.618854 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.619166 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.621974 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622512 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.622548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622652 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.624950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625275 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.625302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625419 1142862 provision.go:143] copyHostCerts
	I0603 13:50:47.625504 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:47.625520 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:47.625591 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:47.625697 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:47.625706 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:47.625725 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:47.625790 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:47.625800 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:47.625826 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:47.625891 1142862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.no-preload-817450 san=[127.0.0.1 192.168.72.125 localhost minikube no-preload-817450]
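
provision.go:117 above issues a server certificate signed by the minikube CA, carrying the organization and SAN list shown. A rough, self-contained sketch of producing such a SAN certificate with Go's crypto/x509 (illustrative only: minikube loads its CA from ca.pem/ca-key.pem, whereas this sketch generates one in memory, and the validity periods are assumptions; error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In-memory CA standing in for ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the org and SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-817450"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.125")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-817450"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// PEM-encode the signed server certificate (server.pem equivalent).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
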
	I0603 13:50:47.733710 1142862 provision.go:177] copyRemoteCerts
	I0603 13:50:47.733769 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:47.733801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.736326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736657 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.736686 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.737036 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.737222 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.737341 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:47.821893 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:47.848085 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 13:50:47.875891 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:47.900761 1142862 provision.go:87] duration metric: took 281.906702ms to configureAuth
	I0603 13:50:47.900795 1142862 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:47.900986 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:47.901072 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.904128 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904551 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.904581 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904802 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.905018 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905203 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905413 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.905609 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.905816 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.905839 1142862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:48.176290 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:48.176321 1142862 machine.go:97] duration metric: took 905.950732ms to provisionDockerMachine
	I0603 13:50:48.176333 1142862 start.go:293] postStartSetup for "no-preload-817450" (driver="kvm2")
	I0603 13:50:48.176344 1142862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:48.176361 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.176689 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:48.176712 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.179595 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.179994 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.180020 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.180186 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.180398 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.180561 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.180704 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.267996 1142862 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:48.272936 1142862 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:48.272970 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:48.273044 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:48.273141 1142862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:48.273285 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:48.283984 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:48.310846 1142862 start.go:296] duration metric: took 134.495139ms for postStartSetup
	I0603 13:50:48.310899 1142862 fix.go:56] duration metric: took 18.512032449s for fixHost
	I0603 13:50:48.310928 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.313969 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314331 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.314358 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.314896 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315258 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.315442 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:48.315681 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:48.315698 1142862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:50:48.422576 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422648.390814282
	
	I0603 13:50:48.422599 1142862 fix.go:216] guest clock: 1717422648.390814282
	I0603 13:50:48.422606 1142862 fix.go:229] Guest: 2024-06-03 13:50:48.390814282 +0000 UTC Remote: 2024-06-03 13:50:48.310904217 +0000 UTC m=+357.796105522 (delta=79.910065ms)
	I0603 13:50:48.422636 1142862 fix.go:200] guest clock delta is within tolerance: 79.910065ms, skipping clock resync
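
fix.go reads the guest clock with `date +%s.%N`, compares it against the host clock, and only resynchronizes when the delta exceeds a tolerance, which it does not here (≈79.9ms). A toy version of that comparison (the 2-second tolerance is an assumption for illustration):

package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether the guest clock is close enough to the
// host clock, returning the absolute delta as well.
func withinClockTolerance(guest, host time.Time) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= 2*time.Second
}

func main() {
	// Timestamps taken from the log entry above (seconds, nanoseconds).
	guest := time.Unix(1717422648, 390814282)
	host := time.Unix(1717422648, 310904217)
	delta, ok := withinClockTolerance(guest, host)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta ≈ 79.9ms
}
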
	I0603 13:50:48.422642 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 18.623816039s
	I0603 13:50:48.422659 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.422954 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:48.426261 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426671 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.426701 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426864 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427460 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427661 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427762 1142862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:48.427827 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.427878 1142862 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:48.427914 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.430586 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430830 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430965 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.430993 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431177 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.431355 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431387 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431516 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431676 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431751 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.431798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431936 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.506899 1142862 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:48.545903 1142862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:48.700235 1142862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:48.706614 1142862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:48.706704 1142862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:48.724565 1142862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:48.724592 1142862 start.go:494] detecting cgroup driver to use...
	I0603 13:50:48.724664 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:48.741006 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:48.758824 1142862 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:48.758899 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:48.773280 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:48.791049 1142862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:48.917847 1142862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:49.081837 1142862 docker.go:233] disabling docker service ...
	I0603 13:50:49.081927 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:49.097577 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:49.112592 1142862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:49.228447 1142862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:49.350782 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:49.366017 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:49.385685 1142862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:49.385765 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.396361 1142862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:49.396432 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.408606 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.419642 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.430431 1142862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:49.441378 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.451810 1142862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.469080 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.480054 1142862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:49.489742 1142862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:49.489814 1142862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:49.502889 1142862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:49.512414 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:49.639903 1142862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:49.786388 1142862 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:49.786486 1142862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:49.791642 1142862 start.go:562] Will wait 60s for crictl version
	I0603 13:50:49.791711 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:49.796156 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:49.841667 1142862 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:49.841765 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.872213 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.910979 1142862 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:46.370749 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:48.870860 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
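
The interleaved pod_ready lines come from a parallel test process (1143450) polling whether the metrics-server pod has reached the Ready condition. A sketch of such a readiness poll using client-go (illustrative, not minikube's pod_ready implementation; it assumes a kubeconfig at the default location, and the pod name is taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check in the log: a pod counts as ready only when
// its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-569cc877fc-8xw9v", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}
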
	I0603 13:50:49.912417 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:49.915368 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915731 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:49.915759 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915913 1142862 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:49.920247 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:49.933231 1142862 kubeadm.go:877] updating cluster {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:49.933358 1142862 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:49.933388 1142862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:49.970029 1142862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
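
Because no preload tarball covers this configuration, crio.go checks which images the runtime already has via `sudo crictl images --output json` and, finding kube-apiserver:v1.30.1 missing, falls back to loading the cached images one by one. A simplified sketch of that check (the struct mirrors only the `images`/`repoTags` fields of crictl's JSON output, as I understand that format):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages covers just the fields this check needs from
// `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the container runtime already knows the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.1")
	fmt.Println("preloaded:", ok, "err:", err)
}
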
	I0603 13:50:49.970059 1142862 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:49.970118 1142862 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:49.970147 1142862 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.970163 1142862 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.970198 1142862 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.970239 1142862 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.970316 1142862 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.970328 1142862 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.970379 1142862 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971837 1142862 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.971841 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.971808 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.971876 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.971816 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.971813 1142862 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.126557 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 13:50:50.146394 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.149455 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.149755 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.154990 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.162983 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.177520 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.188703 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.299288 1142862 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 13:50:50.299312 1142862 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 13:50:50.299345 1142862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.299350 1142862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.299389 1142862 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 13:50:50.299406 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299413 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299422 1142862 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.299488 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353368 1142862 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 13:50:50.353431 1142862 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.353485 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353506 1142862 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 13:50:50.353543 1142862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.353591 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379011 1142862 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 13:50:50.379028 1142862 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 13:50:50.379054 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.379062 1142862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.379105 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379075 1142862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.379146 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.379181 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379212 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.379229 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.379239 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.482204 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 13:50:50.482210 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.482332 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.511560 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 13:50:50.511671 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 13:50:50.511721 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.511769 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:50.511772 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 13:50:50.511682 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:50.511868 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:50.512290 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 13:50:50.512360 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:50.549035 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 13:50:50.549061 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 13:50:50.549066 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549156 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549166 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:50:47.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.193894 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.694053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.694081 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.194053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.694265 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.694283 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:52.194444 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.321194 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.816679 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:52.818121 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:51.372716 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:53.372880 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.573615 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 13:50:50.573661 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 13:50:50.573708 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 13:50:50.573737 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:50.573754 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 13:50:50.573816 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 13:50:50.573839 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 13:50:52.739312 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (2.190102069s)
	I0603 13:50:52.739333 1142862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.165569436s)
	I0603 13:50:52.739354 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 13:50:52.739365 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 13:50:52.739372 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:52.739420 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:54.995960 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.256502953s)
	I0603 13:50:54.996000 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 13:50:54.996019 1142862 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:54.996076 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:52.694071 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.193597 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.694503 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.193609 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.694446 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.193856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.693583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.194271 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.693558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:57.194427 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.317668 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:57.318423 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.872030 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:58.376034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.844775 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 13:50:55.844853 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:55.844967 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:58.110074 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.265068331s)
	I0603 13:50:58.110103 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 13:50:58.110115 1142862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:58.110169 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:59.979789 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.869594477s)
	I0603 13:50:59.979817 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 13:50:59.979832 1142862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:59.979875 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:57.694027 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.193718 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.693488 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.193725 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.694310 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.194455 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.694182 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.193916 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.693504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:02.194236 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.816444 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:01.817757 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:00.872105 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:03.373427 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:04.067476 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.087571936s)
	I0603 13:51:04.067529 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 13:51:04.067549 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:04.067605 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:02.694248 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.194094 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.694072 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.194494 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.693899 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.193578 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.193934 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.693586 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:07.193993 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.316979 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:06.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.871061 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:08.371377 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.819264 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.75162069s)
	I0603 13:51:05.819302 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 13:51:05.819334 1142862 cache_images.go:123] Successfully loaded all cached images
	I0603 13:51:05.819341 1142862 cache_images.go:92] duration metric: took 15.849267186s to LoadCachedImages
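
The 15.8s LoadCachedImages total above is the sum of the serial `sudo podman load -i` calls on the tarballs under /var/lib/minikube/images. A bare-bones loop performing the same sequence (hypothetical sketch; the directory path is taken from the log, and it needs root and podman on the guest):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"time"
)

// loadCachedImages runs `sudo podman load -i` for every image tarball in dir,
// mirroring the serial loading sequence in the log.
func loadCachedImages(dir string) error {
	tarballs, err := filepath.Glob(filepath.Join(dir, "*"))
	if err != nil {
		return err
	}
	for _, tb := range tarballs {
		start := time.Now()
		if out, err := exec.Command("sudo", "podman", "load", "-i", tb).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tb, err, out)
		}
		fmt.Printf("loaded %s in %s\n", tb, time.Since(start))
	}
	return nil
}

func main() {
	if err := loadCachedImages("/var/lib/minikube/images"); err != nil {
		panic(err)
	}
}
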
	I0603 13:51:05.819352 1142862 kubeadm.go:928] updating node { 192.168.72.125 8443 v1.30.1 crio true true} ...
	I0603 13:51:05.819549 1142862 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-817450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:51:05.819636 1142862 ssh_runner.go:195] Run: crio config
	I0603 13:51:05.874089 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:05.874114 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:05.874127 1142862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:51:05.874152 1142862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.125 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-817450 NodeName:no-preload-817450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:51:05.874339 1142862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-817450"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:51:05.874411 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:51:05.886116 1142862 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:51:05.886185 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:51:05.896269 1142862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 13:51:05.914746 1142862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:51:05.931936 1142862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
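	(Editor's note: a minimal, hypothetical Go sketch, not minikube source, that splits a multi-document manifest such as the /var/tmp/minikube/kubeadm.yaml.new written above and reports the kind of each document — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration in the dump shown earlier. The file path is taken from the log; everything else is illustrative.)

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func main() {
		// Path copied from the log above; on the host it would need to be
		// read over SSH the way minikube's ssh_runner does.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
		// The generated file separates documents with a bare "---" line.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			if m := kindRe.FindStringSubmatch(doc); m != nil {
				fmt.Printf("document %d: %s\n", i, m[1])
			}
		}
	}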
	I0603 13:51:05.949151 1142862 ssh_runner.go:195] Run: grep 192.168.72.125	control-plane.minikube.internal$ /etc/hosts
	I0603 13:51:05.953180 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:51:05.966675 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:51:06.107517 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:51:06.129233 1142862 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450 for IP: 192.168.72.125
	I0603 13:51:06.129264 1142862 certs.go:194] generating shared ca certs ...
	I0603 13:51:06.129280 1142862 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:51:06.129517 1142862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:51:06.129583 1142862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:51:06.129597 1142862 certs.go:256] generating profile certs ...
	I0603 13:51:06.129686 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/client.key
	I0603 13:51:06.129746 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key.e8ec030b
	I0603 13:51:06.129779 1142862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key
	I0603 13:51:06.129885 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:51:06.129912 1142862 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:51:06.129919 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:51:06.129939 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:51:06.129965 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:51:06.129991 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:51:06.130028 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:51:06.130817 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:51:06.171348 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:51:06.206270 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:51:06.240508 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:51:06.292262 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:51:06.320406 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:51:06.346655 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:51:06.375908 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:51:06.401723 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:51:06.425992 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:51:06.450484 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:51:06.475206 1142862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:51:06.492795 1142862 ssh_runner.go:195] Run: openssl version
	I0603 13:51:06.499759 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:51:06.511760 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516690 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516763 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.523284 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:51:06.535250 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:51:06.545921 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550765 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550823 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.556898 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:51:06.567717 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:51:06.578662 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584084 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584153 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.591566 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:51:06.603554 1142862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:51:06.608323 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:51:06.614939 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:51:06.621519 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:51:06.627525 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:51:06.633291 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:51:06.639258 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
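	(Editor's note: the openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours. Below is a minimal sketch of the same check in Go using only the standard library; the certificate path is copied from the log, and the rest is an assumption, not minikube's implementation.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Certificate path taken from the log line above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// openssl's -checkend 86400 asks: will the cert still be valid in 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}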
	I0603 13:51:06.644789 1142862 kubeadm.go:391] StartCluster: {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:51:06.644876 1142862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:51:06.644928 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.694731 1142862 cri.go:89] found id: ""
	I0603 13:51:06.694811 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:51:06.709773 1142862 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:51:06.709804 1142862 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:51:06.709812 1142862 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:51:06.709875 1142862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:51:06.721095 1142862 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:51:06.722256 1142862 kubeconfig.go:125] found "no-preload-817450" server: "https://192.168.72.125:8443"
	I0603 13:51:06.724877 1142862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:51:06.735753 1142862 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.125
	I0603 13:51:06.735789 1142862 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:51:06.735802 1142862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:51:06.735847 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.776650 1142862 cri.go:89] found id: ""
	I0603 13:51:06.776743 1142862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:51:06.796259 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:51:06.809765 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:51:06.809785 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:51:06.809839 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:51:06.819821 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:51:06.819878 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:51:06.829960 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:51:06.839510 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:51:06.839561 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:51:06.849346 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.858834 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:51:06.858886 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.869159 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:51:06.879672 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:51:06.879739 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:51:06.889393 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:51:06.899309 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:07.021375 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.119929 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.098510185s)
	I0603 13:51:08.119959 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.318752 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.396713 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.506285 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:51:08.506384 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.006865 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.506528 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.582432 1142862 api_server.go:72] duration metric: took 1.076134659s to wait for apiserver process to appear ...
	I0603 13:51:09.582463 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:51:09.582507 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:07.693540 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.194490 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.694498 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.194496 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.694286 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.193605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.694326 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.193904 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.694504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:12.194093 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.318739 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.817309 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.371622 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.372640 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:14.871007 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.049693 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.049731 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.049748 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.084495 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.084526 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.084541 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.141515 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.141555 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:12.582630 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.590279 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.082813 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.097350 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.097380 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.582895 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.587479 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.587511 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.083076 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.087531 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.087561 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.583203 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.587735 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.587781 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.082844 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.087403 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:15.087438 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.583226 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:51:15.601732 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:51:15.601762 1142862 api_server.go:131] duration metric: took 6.019291333s to wait for apiserver health ...
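	(Editor's note: the healthz progression above — 403 while anonymous access is still forbidden, 500 while poststarthooks such as rbac/bootstrap-roles finish, then 200 — is what the api_server.go wait loop is reporting. Below is a rough, assumed sketch of such a poll loop in Go; the endpoint address comes from the log, and skipping TLS verification is only to keep the example self-contained, not how minikube authenticates.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The anonymous probe in the log presents no client cert;
				// skipping verification here is purely for the sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.72.125:8443/healthz" // address taken from the log
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}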
	I0603 13:51:15.601775 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:15.601784 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:15.603654 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:51:12.694356 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.194219 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.693546 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.694003 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.694012 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.193567 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.694014 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:17.193554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.320666 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.818073 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.369593 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:19.369916 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.605291 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:51:15.618333 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:51:15.640539 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:51:15.651042 1142862 system_pods.go:59] 8 kube-system pods found
	I0603 13:51:15.651086 1142862 system_pods.go:61] "coredns-7db6d8ff4d-s562v" [be995d41-2b25-4839-a36b-212a507e7db7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:51:15.651102 1142862 system_pods.go:61] "etcd-no-preload-817450" [1b21708b-d81b-4594-a186-546437467c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:51:15.651117 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [0741a4bf-3161-4cf3-a9c6-36af2a0c4fde] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:51:15.651126 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [43713383-9197-4874-8aa9-7b1b1f05e4b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:51:15.651133 1142862 system_pods.go:61] "kube-proxy-2j4sg" [112657ad-311a-46ee-b5c0-6f544991465e] Running
	I0603 13:51:15.651145 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [40db5c40-dc01-4fd3-a5e0-06a6ee1fd0a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:51:15.651152 1142862 system_pods.go:61] "metrics-server-569cc877fc-mtvrq" [00cb7657-2564-4d25-8faa-b6f618e61115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:51:15.651163 1142862 system_pods.go:61] "storage-provisioner" [913d3120-32ce-4212-84be-9e3b99f2a894] Running
	I0603 13:51:15.651171 1142862 system_pods.go:74] duration metric: took 10.608401ms to wait for pod list to return data ...
	I0603 13:51:15.651181 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:51:15.654759 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:51:15.654784 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:51:15.654795 1142862 node_conditions.go:105] duration metric: took 3.608137ms to run NodePressure ...
	I0603 13:51:15.654813 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:15.940085 1142862 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944785 1142862 kubeadm.go:733] kubelet initialised
	I0603 13:51:15.944808 1142862 kubeadm.go:734] duration metric: took 4.692827ms waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944817 1142862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:51:15.950113 1142862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:17.958330 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.456029 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
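	(Editor's note: the pod_ready.go lines above poll pod conditions until the Ready condition is True or the 4m0s budget runs out. Below is a hypothetical equivalent driven through kubectl from Go, not the minikube code path; the context, namespace, and pod name are copied from the log.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// The jsonpath expression extracts the status of the Ready condition.
		args := []string{
			"--context", "no-preload-817450",
			"-n", "kube-system",
			"get", "pod", "coredns-7db6d8ff4d-s562v",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", args...).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}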
	I0603 13:51:17.693856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.193853 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.693858 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.193568 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.693680 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.193556 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.694129 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.193662 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.694445 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:22.193668 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
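The 1143678 process above is retrying sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms, waiting for a kube-apiserver process to appear inside the guest. A minimal local sketch of that wait loop follows; the two-minute budget is a placeholder, and minikube issues the command over SSH rather than locally.

// apiserver_pgrep_wait.go: poll for a kube-apiserver process the way the log above
// does, re-running "pgrep -xnf kube-apiserver.*minikube.*" on a fixed interval.
// Illustrative only; runs the command locally instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // placeholder retry budget
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence of the retries above
	}
	fmt.Println("kube-apiserver process never appeared before the deadline")
}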
	I0603 13:51:18.317128 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.317375 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.317530 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.371070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:23.871400 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.958183 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:21.958208 1142862 pod_ready.go:81] duration metric: took 6.008058251s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:21.958220 1142862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:23.964785 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.694004 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.193793 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.694340 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.194411 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.694314 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.194501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.693545 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.194255 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.694312 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:27.194453 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.817165 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.317176 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:26.369665 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:28.370392 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:25.966060 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.965236 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.965267 1142862 pod_ready.go:81] duration metric: took 6.007038184s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.965281 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969898 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.969920 1142862 pod_ready.go:81] duration metric: took 4.630357ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969932 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974500 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.974517 1142862 pod_ready.go:81] duration metric: took 4.577117ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974526 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978510 1142862 pod_ready.go:92] pod "kube-proxy-2j4sg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.978530 1142862 pod_ready.go:81] duration metric: took 3.997645ms for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978537 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982488 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.982507 1142862 pod_ready.go:81] duration metric: took 3.962666ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982518 1142862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:29.989265 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.694334 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.193809 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.693744 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.193608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.194111 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.694213 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.694336 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:32.193716 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.324199 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:30.370435 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.870510 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.872543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.990649 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.488899 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.693501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.194174 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.693995 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.194242 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.693961 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.194052 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.693730 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.193559 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.693763 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:37.194274 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.816533 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.316832 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.371543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:39.372034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.489364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:38.490431 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:40.490888 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.693590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.194328 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.694296 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.194272 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.693607 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:40.193595 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:40.193691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:40.237747 1143678 cri.go:89] found id: ""
	I0603 13:51:40.237776 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.237785 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:40.237792 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:40.237854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:40.275924 1143678 cri.go:89] found id: ""
	I0603 13:51:40.275964 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.275975 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:40.275983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:40.276049 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:40.314827 1143678 cri.go:89] found id: ""
	I0603 13:51:40.314857 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.314870 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:40.314877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:40.314939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:40.359040 1143678 cri.go:89] found id: ""
	I0603 13:51:40.359072 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.359084 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:40.359092 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:40.359154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:40.396136 1143678 cri.go:89] found id: ""
	I0603 13:51:40.396170 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.396185 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:40.396194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:40.396261 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:40.436766 1143678 cri.go:89] found id: ""
	I0603 13:51:40.436803 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.436814 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:40.436828 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:40.436902 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:40.477580 1143678 cri.go:89] found id: ""
	I0603 13:51:40.477606 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.477615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:40.477621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:40.477713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:40.518920 1143678 cri.go:89] found id: ""
	I0603 13:51:40.518960 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.518972 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:40.518984 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:40.519001 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:40.659881 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:40.659913 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:40.659932 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:40.727850 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:40.727894 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:40.774153 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:40.774189 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:40.828054 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:40.828094 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
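Because pgrep never finds an apiserver, the cycle above falls back to asking CRI-O directly: for each control-plane component it runs sudo crictl ps -a --quiet --name=<component>, gets an empty ID list, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output for diagnosis. Below is a small sketch of that container-presence probe, reusing the exact crictl invocation from the log; it runs the command locally purely for illustration, whereas minikube executes it over SSH inside the VM.

// crictl_probe.go: check whether any kube-apiserver container exists, using the
// same invocation the log shows ("sudo crictl ps -a --quiet --name=kube-apiserver").
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(strings.TrimSpace(string(out)))
	if len(ids) == 0 {
		// Matches the 'No container was found matching "kube-apiserver"' warnings above.
		fmt.Println("0 containers: kube-apiserver has not been created yet")
		return
	}
	fmt.Printf("%d container(s): %v\n", len(ids), ids)
}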
	I0603 13:51:38.820985 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.322044 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.870717 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.872112 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:42.988898 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:44.989384 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.342659 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:43.357063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:43.357131 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:43.398000 1143678 cri.go:89] found id: ""
	I0603 13:51:43.398036 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.398045 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:43.398051 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:43.398106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:43.436761 1143678 cri.go:89] found id: ""
	I0603 13:51:43.436805 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.436814 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:43.436820 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:43.436872 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:43.478122 1143678 cri.go:89] found id: ""
	I0603 13:51:43.478154 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.478164 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:43.478172 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:43.478243 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:43.514473 1143678 cri.go:89] found id: ""
	I0603 13:51:43.514511 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.514523 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:43.514532 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:43.514600 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:43.552354 1143678 cri.go:89] found id: ""
	I0603 13:51:43.552390 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.552399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:43.552405 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:43.552489 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:43.590637 1143678 cri.go:89] found id: ""
	I0603 13:51:43.590665 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.590677 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:43.590685 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:43.590745 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:43.633958 1143678 cri.go:89] found id: ""
	I0603 13:51:43.634001 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.634013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:43.634021 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:43.634088 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:43.672640 1143678 cri.go:89] found id: ""
	I0603 13:51:43.672683 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.672695 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:43.672716 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:43.672733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:43.725880 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:43.725937 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:43.743736 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:43.743771 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:43.831757 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:43.831785 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:43.831801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:43.905062 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:43.905114 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:46.459588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:46.472911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:46.472983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:46.513723 1143678 cri.go:89] found id: ""
	I0603 13:51:46.513757 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.513768 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:46.513776 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:46.513845 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:46.549205 1143678 cri.go:89] found id: ""
	I0603 13:51:46.549234 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.549242 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:46.549251 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:46.549311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:46.585004 1143678 cri.go:89] found id: ""
	I0603 13:51:46.585042 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.585053 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:46.585063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:46.585120 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:46.620534 1143678 cri.go:89] found id: ""
	I0603 13:51:46.620571 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.620582 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:46.620590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:46.620661 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:46.655974 1143678 cri.go:89] found id: ""
	I0603 13:51:46.656005 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.656014 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:46.656020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:46.656091 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:46.693078 1143678 cri.go:89] found id: ""
	I0603 13:51:46.693141 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.693158 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:46.693168 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:46.693244 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:46.729177 1143678 cri.go:89] found id: ""
	I0603 13:51:46.729213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.729223 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:46.729232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:46.729300 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:46.766899 1143678 cri.go:89] found id: ""
	I0603 13:51:46.766929 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.766937 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:46.766946 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:46.766959 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:46.826715 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:46.826757 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:46.841461 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:46.841504 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:46.914505 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:46.914533 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:46.914551 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:46.989886 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:46.989928 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:43.817456 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:45.817576 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.370927 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:48.371196 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.990440 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.489483 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.532804 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:49.547359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:49.547438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:49.584262 1143678 cri.go:89] found id: ""
	I0603 13:51:49.584299 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.584311 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:49.584319 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:49.584389 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:49.622332 1143678 cri.go:89] found id: ""
	I0603 13:51:49.622372 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.622384 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:49.622393 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:49.622488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:49.664339 1143678 cri.go:89] found id: ""
	I0603 13:51:49.664378 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.664390 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:49.664399 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:49.664468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:49.712528 1143678 cri.go:89] found id: ""
	I0603 13:51:49.712558 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.712565 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:49.712574 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:49.712640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:49.767343 1143678 cri.go:89] found id: ""
	I0603 13:51:49.767374 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.767382 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:49.767388 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:49.767450 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:49.822457 1143678 cri.go:89] found id: ""
	I0603 13:51:49.822491 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.822499 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:49.822505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:49.822561 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:49.867823 1143678 cri.go:89] found id: ""
	I0603 13:51:49.867855 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.867867 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:49.867875 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:49.867936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:49.906765 1143678 cri.go:89] found id: ""
	I0603 13:51:49.906797 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.906805 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:49.906816 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:49.906829 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:49.921731 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:49.921764 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:49.993832 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:49.993860 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:49.993878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:50.070080 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:50.070125 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:50.112323 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:50.112357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:48.317830 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.816577 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.817035 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.871664 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.871865 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:51.990258 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:54.489037 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.666289 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:52.680475 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:52.680550 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:52.722025 1143678 cri.go:89] found id: ""
	I0603 13:51:52.722063 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.722075 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:52.722083 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:52.722145 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:52.759709 1143678 cri.go:89] found id: ""
	I0603 13:51:52.759742 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.759754 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:52.759762 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:52.759838 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:52.797131 1143678 cri.go:89] found id: ""
	I0603 13:51:52.797162 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.797171 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:52.797176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:52.797231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:52.832921 1143678 cri.go:89] found id: ""
	I0603 13:51:52.832951 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.832959 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:52.832965 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:52.833024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:52.869361 1143678 cri.go:89] found id: ""
	I0603 13:51:52.869389 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.869399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:52.869422 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:52.869495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:52.905863 1143678 cri.go:89] found id: ""
	I0603 13:51:52.905897 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.905909 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:52.905917 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:52.905985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:52.940407 1143678 cri.go:89] found id: ""
	I0603 13:51:52.940438 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.940446 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:52.940452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:52.940517 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:52.982079 1143678 cri.go:89] found id: ""
	I0603 13:51:52.982115 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.982126 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:52.982138 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:52.982155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:53.066897 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:53.066942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:53.108016 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:53.108056 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:53.164105 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:53.164151 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:53.178708 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:53.178743 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:53.257441 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.758633 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:55.774241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:55.774329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:55.809373 1143678 cri.go:89] found id: ""
	I0603 13:51:55.809436 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.809450 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:55.809467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:55.809539 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:55.849741 1143678 cri.go:89] found id: ""
	I0603 13:51:55.849768 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.849776 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:55.849783 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:55.849834 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:55.893184 1143678 cri.go:89] found id: ""
	I0603 13:51:55.893216 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.893228 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:55.893238 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:55.893307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:55.931572 1143678 cri.go:89] found id: ""
	I0603 13:51:55.931618 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.931632 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:55.931642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:55.931713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:55.969490 1143678 cri.go:89] found id: ""
	I0603 13:51:55.969527 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.969538 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:55.969546 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:55.969614 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:56.009266 1143678 cri.go:89] found id: ""
	I0603 13:51:56.009301 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.009313 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:56.009321 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:56.009394 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:56.049471 1143678 cri.go:89] found id: ""
	I0603 13:51:56.049520 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.049540 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:56.049547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:56.049616 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:56.090176 1143678 cri.go:89] found id: ""
	I0603 13:51:56.090213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.090228 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:56.090241 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:56.090266 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:56.175692 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:56.175737 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:56.222642 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:56.222683 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:56.276258 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:56.276301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:56.291703 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:56.291739 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:56.364788 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
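Every describe-nodes attempt above fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl listings: there is no kube-apiserver container to accept the connection. The sketch below reproduces that symptom as a plain TCP dial against the same endpoint; the address is taken from the error text and may need adjusting for a differently configured cluster.

// apiserver_dial.go: probe the apiserver endpoint kubectl is using, to show what
// the "connection refused" errors above correspond to at the TCP level.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Expected while no kube-apiserver container is running (see the empty crictl listings above).
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}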
	I0603 13:51:55.316604 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.816804 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:55.370917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.372903 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:59.870783 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:56.489636 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.990006 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.865558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:58.879983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:58.880074 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:58.917422 1143678 cri.go:89] found id: ""
	I0603 13:51:58.917461 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.917473 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:58.917480 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:58.917535 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:58.953900 1143678 cri.go:89] found id: ""
	I0603 13:51:58.953933 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.953943 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:58.953959 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:58.954030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:58.988677 1143678 cri.go:89] found id: ""
	I0603 13:51:58.988704 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.988713 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:58.988721 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:58.988783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:59.023436 1143678 cri.go:89] found id: ""
	I0603 13:51:59.023474 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.023486 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:59.023494 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:59.023570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:59.061357 1143678 cri.go:89] found id: ""
	I0603 13:51:59.061386 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.061394 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:59.061400 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:59.061487 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:59.102995 1143678 cri.go:89] found id: ""
	I0603 13:51:59.103025 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.103038 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:59.103047 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:59.103124 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:59.141443 1143678 cri.go:89] found id: ""
	I0603 13:51:59.141480 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.141492 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:59.141499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:59.141586 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:59.182909 1143678 cri.go:89] found id: ""
	I0603 13:51:59.182943 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.182953 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:59.182967 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:59.182984 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:59.259533 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:59.259580 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:59.308976 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:59.309016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.362092 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:59.362142 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:59.378836 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:59.378887 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:59.454524 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:01.954939 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:01.969968 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:01.970039 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:02.014226 1143678 cri.go:89] found id: ""
	I0603 13:52:02.014267 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.014280 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:02.014289 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:02.014361 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:02.051189 1143678 cri.go:89] found id: ""
	I0603 13:52:02.051244 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.051259 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:02.051268 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:02.051349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:02.093509 1143678 cri.go:89] found id: ""
	I0603 13:52:02.093548 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.093575 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:02.093586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:02.093718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:02.132069 1143678 cri.go:89] found id: ""
	I0603 13:52:02.132113 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.132129 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:02.132138 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:02.132299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:02.168043 1143678 cri.go:89] found id: ""
	I0603 13:52:02.168071 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.168079 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:02.168085 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:02.168138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:02.207029 1143678 cri.go:89] found id: ""
	I0603 13:52:02.207064 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.207074 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:02.207081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:02.207134 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:02.247669 1143678 cri.go:89] found id: ""
	I0603 13:52:02.247719 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.247728 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:02.247734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:02.247848 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:02.285780 1143678 cri.go:89] found id: ""
	I0603 13:52:02.285817 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.285829 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:02.285841 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:02.285863 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.817887 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.818381 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.871338 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:04.371052 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:00.990263 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.990651 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:05.490343 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.348775 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:02.349776 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:02.364654 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:02.364691 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:02.447948 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:02.447978 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:02.447992 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:02.534039 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:02.534100 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:05.080437 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:05.094169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:05.094245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:05.132312 1143678 cri.go:89] found id: ""
	I0603 13:52:05.132339 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.132346 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:05.132352 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:05.132423 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:05.168941 1143678 cri.go:89] found id: ""
	I0603 13:52:05.168979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.168990 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:05.168999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:05.169068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:05.207151 1143678 cri.go:89] found id: ""
	I0603 13:52:05.207188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.207196 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:05.207202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:05.207272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:05.258807 1143678 cri.go:89] found id: ""
	I0603 13:52:05.258839 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.258850 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:05.258859 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:05.259004 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:05.298250 1143678 cri.go:89] found id: ""
	I0603 13:52:05.298285 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.298297 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:05.298306 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:05.298381 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:05.340922 1143678 cri.go:89] found id: ""
	I0603 13:52:05.340951 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.340959 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:05.340966 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:05.341027 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:05.382680 1143678 cri.go:89] found id: ""
	I0603 13:52:05.382707 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.382715 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:05.382722 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:05.382777 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:05.426774 1143678 cri.go:89] found id: ""
	I0603 13:52:05.426801 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.426811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:05.426822 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:05.426836 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:05.483042 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:05.483091 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:05.499119 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:05.499159 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:05.580933 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:05.580962 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:05.580983 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:05.660395 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:05.660437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:03.818676 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.316881 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.371515 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.871174 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:07.490662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:09.992709 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.200887 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:08.215113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:08.215203 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:08.252367 1143678 cri.go:89] found id: ""
	I0603 13:52:08.252404 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.252417 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:08.252427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:08.252500 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:08.289249 1143678 cri.go:89] found id: ""
	I0603 13:52:08.289279 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.289290 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:08.289298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:08.289364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:08.331155 1143678 cri.go:89] found id: ""
	I0603 13:52:08.331181 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.331195 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:08.331201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:08.331258 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:08.371376 1143678 cri.go:89] found id: ""
	I0603 13:52:08.371400 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.371408 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:08.371415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:08.371477 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:08.408009 1143678 cri.go:89] found id: ""
	I0603 13:52:08.408045 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.408057 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:08.408065 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:08.408119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:08.446377 1143678 cri.go:89] found id: ""
	I0603 13:52:08.446413 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.446421 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:08.446429 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:08.446504 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:08.485429 1143678 cri.go:89] found id: ""
	I0603 13:52:08.485461 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.485471 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:08.485479 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:08.485546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:08.527319 1143678 cri.go:89] found id: ""
	I0603 13:52:08.527363 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.527375 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:08.527388 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:08.527414 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:08.602347 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:08.602371 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:08.602384 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:08.683855 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:08.683902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.724402 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:08.724443 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:08.781154 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:08.781202 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.297827 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:11.313927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:11.314006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:11.352622 1143678 cri.go:89] found id: ""
	I0603 13:52:11.352660 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.352671 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:11.352678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:11.352755 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:11.395301 1143678 cri.go:89] found id: ""
	I0603 13:52:11.395338 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.395351 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:11.395360 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:11.395442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:11.431104 1143678 cri.go:89] found id: ""
	I0603 13:52:11.431143 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.431155 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:11.431170 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:11.431234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:11.470177 1143678 cri.go:89] found id: ""
	I0603 13:52:11.470212 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.470223 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:11.470241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:11.470309 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:11.508741 1143678 cri.go:89] found id: ""
	I0603 13:52:11.508779 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.508803 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:11.508810 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:11.508906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:11.544970 1143678 cri.go:89] found id: ""
	I0603 13:52:11.545002 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.545012 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:11.545022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:11.545093 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:11.583606 1143678 cri.go:89] found id: ""
	I0603 13:52:11.583636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.583653 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:11.583666 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:11.583739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:11.624770 1143678 cri.go:89] found id: ""
	I0603 13:52:11.624806 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.624815 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:11.624824 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:11.624841 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:11.680251 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:11.680298 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.695656 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:11.695695 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:11.770414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:11.770478 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:11.770497 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:11.850812 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:11.850871 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.318447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:10.817734 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:11.372533 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:13.871822 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:12.490666 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.988752 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.398649 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:14.411591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:14.411689 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:14.447126 1143678 cri.go:89] found id: ""
	I0603 13:52:14.447158 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.447170 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:14.447178 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:14.447245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:14.486681 1143678 cri.go:89] found id: ""
	I0603 13:52:14.486716 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.486728 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:14.486735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:14.486799 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:14.521297 1143678 cri.go:89] found id: ""
	I0603 13:52:14.521326 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.521337 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:14.521343 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:14.521443 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:14.565086 1143678 cri.go:89] found id: ""
	I0603 13:52:14.565121 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.565130 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:14.565136 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:14.565196 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:14.601947 1143678 cri.go:89] found id: ""
	I0603 13:52:14.601975 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.601984 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:14.601990 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:14.602044 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:14.638332 1143678 cri.go:89] found id: ""
	I0603 13:52:14.638359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.638366 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:14.638374 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:14.638435 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:14.675254 1143678 cri.go:89] found id: ""
	I0603 13:52:14.675284 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.675293 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:14.675299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:14.675354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:14.712601 1143678 cri.go:89] found id: ""
	I0603 13:52:14.712631 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.712639 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:14.712649 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:14.712663 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:14.787026 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:14.787068 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:14.836534 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:14.836564 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:14.889682 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:14.889729 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:14.905230 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:14.905264 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:14.979090 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:13.317070 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.317490 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.816412 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.871901 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.370626 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:16.989195 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.990108 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.479590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:17.495088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:17.495250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:17.530832 1143678 cri.go:89] found id: ""
	I0603 13:52:17.530871 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.530883 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:17.530891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:17.530966 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:17.567183 1143678 cri.go:89] found id: ""
	I0603 13:52:17.567213 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.567224 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:17.567232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:17.567305 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:17.602424 1143678 cri.go:89] found id: ""
	I0603 13:52:17.602458 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.602469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:17.602493 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:17.602570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:17.641148 1143678 cri.go:89] found id: ""
	I0603 13:52:17.641184 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.641197 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:17.641205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:17.641273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:17.679004 1143678 cri.go:89] found id: ""
	I0603 13:52:17.679031 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.679039 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:17.679045 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:17.679102 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:17.717667 1143678 cri.go:89] found id: ""
	I0603 13:52:17.717698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.717707 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:17.717715 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:17.717786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:17.760262 1143678 cri.go:89] found id: ""
	I0603 13:52:17.760300 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.760323 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:17.760331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:17.760416 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:17.796910 1143678 cri.go:89] found id: ""
	I0603 13:52:17.796943 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.796960 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:17.796976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:17.796990 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:17.811733 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:17.811768 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:17.891891 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:17.891920 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:17.891939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:17.969495 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:17.969535 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:18.032622 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:18.032654 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.586079 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:20.599118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:20.599202 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:20.633732 1143678 cri.go:89] found id: ""
	I0603 13:52:20.633770 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.633780 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:20.633787 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:20.633841 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:20.668126 1143678 cri.go:89] found id: ""
	I0603 13:52:20.668155 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.668163 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:20.668169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:20.668231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:20.704144 1143678 cri.go:89] found id: ""
	I0603 13:52:20.704177 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.704187 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:20.704194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:20.704251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:20.745562 1143678 cri.go:89] found id: ""
	I0603 13:52:20.745594 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.745602 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:20.745608 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:20.745663 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:20.788998 1143678 cri.go:89] found id: ""
	I0603 13:52:20.789041 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.789053 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:20.789075 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:20.789152 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:20.832466 1143678 cri.go:89] found id: ""
	I0603 13:52:20.832495 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.832503 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:20.832510 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:20.832575 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:20.875212 1143678 cri.go:89] found id: ""
	I0603 13:52:20.875248 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.875258 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:20.875267 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:20.875336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:20.912957 1143678 cri.go:89] found id: ""
	I0603 13:52:20.912989 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.912999 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:20.913011 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:20.913030 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.963655 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:20.963700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:20.978619 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:20.978658 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:21.057136 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:21.057163 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:21.057185 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:21.136368 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:21.136415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:19.817227 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.817625 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:20.871465 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.370757 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.488564 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.991662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.676222 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:23.691111 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:23.691213 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:23.733282 1143678 cri.go:89] found id: ""
	I0603 13:52:23.733319 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.733332 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:23.733341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:23.733438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:23.780841 1143678 cri.go:89] found id: ""
	I0603 13:52:23.780873 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.780882 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:23.780894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:23.780947 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:23.820521 1143678 cri.go:89] found id: ""
	I0603 13:52:23.820553 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.820565 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:23.820573 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:23.820636 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:23.857684 1143678 cri.go:89] found id: ""
	I0603 13:52:23.857728 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.857739 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:23.857747 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:23.857818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:23.896800 1143678 cri.go:89] found id: ""
	I0603 13:52:23.896829 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.896842 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:23.896850 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:23.896914 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:23.935511 1143678 cri.go:89] found id: ""
	I0603 13:52:23.935538 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.935547 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:23.935554 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:23.935608 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:23.973858 1143678 cri.go:89] found id: ""
	I0603 13:52:23.973885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.973895 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:23.973901 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:23.973961 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:24.012491 1143678 cri.go:89] found id: ""
	I0603 13:52:24.012521 1143678 logs.go:276] 0 containers: []
	W0603 13:52:24.012532 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:24.012545 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:24.012569 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.064274 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:24.064319 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:24.079382 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:24.079420 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:24.153708 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:24.153733 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:24.153749 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:24.233104 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:24.233148 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:26.774771 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:26.789853 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:26.789924 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:26.830089 1143678 cri.go:89] found id: ""
	I0603 13:52:26.830129 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.830167 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:26.830176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:26.830251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:26.866907 1143678 cri.go:89] found id: ""
	I0603 13:52:26.866941 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.866952 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:26.866960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:26.867031 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:26.915028 1143678 cri.go:89] found id: ""
	I0603 13:52:26.915061 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.915070 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:26.915079 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:26.915151 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:26.962044 1143678 cri.go:89] found id: ""
	I0603 13:52:26.962075 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.962083 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:26.962088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:26.962154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:26.996156 1143678 cri.go:89] found id: ""
	I0603 13:52:26.996188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.996196 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:26.996202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:26.996265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:27.038593 1143678 cri.go:89] found id: ""
	I0603 13:52:27.038627 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.038636 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:27.038642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:27.038708 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:27.076116 1143678 cri.go:89] found id: ""
	I0603 13:52:27.076144 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.076153 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:27.076159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:27.076228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:27.110653 1143678 cri.go:89] found id: ""
	I0603 13:52:27.110688 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.110700 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:27.110714 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:27.110733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:27.193718 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:27.193743 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:27.193756 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:27.269423 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:27.269483 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:27.307899 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:27.307939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.317663 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.817148 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:25.371861 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.870070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:29.870299 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.488753 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:28.489065 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:30.489568 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.363830 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:27.363878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:29.879016 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:29.893482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:29.893553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:29.932146 1143678 cri.go:89] found id: ""
	I0603 13:52:29.932190 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.932199 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:29.932205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:29.932259 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:29.968986 1143678 cri.go:89] found id: ""
	I0603 13:52:29.969020 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.969032 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:29.969040 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:29.969097 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:30.007190 1143678 cri.go:89] found id: ""
	I0603 13:52:30.007228 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.007238 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:30.007244 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:30.007303 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:30.044607 1143678 cri.go:89] found id: ""
	I0603 13:52:30.044638 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.044646 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:30.044652 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:30.044706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:30.083103 1143678 cri.go:89] found id: ""
	I0603 13:52:30.083179 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.083193 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:30.083204 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:30.083280 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:30.124125 1143678 cri.go:89] found id: ""
	I0603 13:52:30.124152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.124160 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:30.124167 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:30.124234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:30.164293 1143678 cri.go:89] found id: ""
	I0603 13:52:30.164329 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.164345 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:30.164353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:30.164467 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:30.219980 1143678 cri.go:89] found id: ""
	I0603 13:52:30.220015 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.220028 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:30.220042 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:30.220063 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:30.313282 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:30.313305 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:30.313323 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:30.393759 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:30.393801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:30.441384 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:30.441434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:30.493523 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:30.493558 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:28.817554 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.317629 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.870659 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.870954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:32.990340 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.495665 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.009114 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:33.023177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:33.023278 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:33.065346 1143678 cri.go:89] found id: ""
	I0603 13:52:33.065388 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.065400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:33.065424 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:33.065506 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:33.108513 1143678 cri.go:89] found id: ""
	I0603 13:52:33.108549 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.108561 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:33.108569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:33.108640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:33.146053 1143678 cri.go:89] found id: ""
	I0603 13:52:33.146082 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.146089 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:33.146107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:33.146165 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:33.187152 1143678 cri.go:89] found id: ""
	I0603 13:52:33.187195 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.187206 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:33.187216 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:33.187302 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:33.223887 1143678 cri.go:89] found id: ""
	I0603 13:52:33.223920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.223932 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:33.223941 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:33.224010 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:33.263902 1143678 cri.go:89] found id: ""
	I0603 13:52:33.263958 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.263971 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:33.263980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:33.264048 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:33.302753 1143678 cri.go:89] found id: ""
	I0603 13:52:33.302785 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.302796 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:33.302805 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:33.302859 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:33.340711 1143678 cri.go:89] found id: ""
	I0603 13:52:33.340745 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.340754 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:33.340763 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:33.340780 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:33.400226 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:33.400271 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:33.414891 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:33.414923 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:33.498121 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:33.498156 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:33.498172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.575682 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:33.575731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.116930 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:36.133001 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:36.133070 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:36.182727 1143678 cri.go:89] found id: ""
	I0603 13:52:36.182763 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.182774 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:36.182782 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:36.182851 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:36.228804 1143678 cri.go:89] found id: ""
	I0603 13:52:36.228841 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.228854 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:36.228862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:36.228929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:36.279320 1143678 cri.go:89] found id: ""
	I0603 13:52:36.279359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.279370 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:36.279378 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:36.279461 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:36.319725 1143678 cri.go:89] found id: ""
	I0603 13:52:36.319751 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.319759 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:36.319765 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:36.319819 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:36.356657 1143678 cri.go:89] found id: ""
	I0603 13:52:36.356685 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.356693 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:36.356703 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:36.356760 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:36.393397 1143678 cri.go:89] found id: ""
	I0603 13:52:36.393448 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.393459 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:36.393467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:36.393545 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:36.429211 1143678 cri.go:89] found id: ""
	I0603 13:52:36.429246 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.429254 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:36.429260 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:36.429324 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:36.466796 1143678 cri.go:89] found id: ""
	I0603 13:52:36.466831 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.466839 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:36.466849 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:36.466862 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.509871 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:36.509900 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:36.562167 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:36.562206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:36.577014 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:36.577047 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:36.657581 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:36.657604 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:36.657625 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.817495 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.820854 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:36.371645 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:38.871484 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:37.989038 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.989986 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.242339 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:39.257985 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:39.258072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:39.300153 1143678 cri.go:89] found id: ""
	I0603 13:52:39.300185 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.300197 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:39.300205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:39.300304 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:39.336117 1143678 cri.go:89] found id: ""
	I0603 13:52:39.336152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.336162 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:39.336175 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:39.336307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:39.375945 1143678 cri.go:89] found id: ""
	I0603 13:52:39.375979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.375990 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:39.375998 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:39.376066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:39.417207 1143678 cri.go:89] found id: ""
	I0603 13:52:39.417242 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.417253 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:39.417261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:39.417340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:39.456259 1143678 cri.go:89] found id: ""
	I0603 13:52:39.456295 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.456307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:39.456315 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:39.456377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:39.494879 1143678 cri.go:89] found id: ""
	I0603 13:52:39.494904 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.494913 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:39.494919 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:39.494979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:39.532129 1143678 cri.go:89] found id: ""
	I0603 13:52:39.532157 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.532168 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:39.532177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:39.532267 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:39.570662 1143678 cri.go:89] found id: ""
	I0603 13:52:39.570693 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.570703 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:39.570717 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:39.570734 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:39.622008 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:39.622057 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:39.636849 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:39.636884 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:39.719914 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:39.719948 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:39.719967 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:39.801723 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:39.801769 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:38.317321 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:40.817649 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.819652 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:41.370965 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:43.371900 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.490311 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:44.988731 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
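	The interleaved pod_ready.go:102 lines come from other test processes running in parallel (note the different PIDs), each polling a metrics-server pod that never reports Ready. As a rough illustration of that check, the sketch below shells out to kubectl and reads the pod's standard Ready condition; the namespace and pod name are taken from the log, the kubectl context is a placeholder to substitute, and minikube's own pod_ready.go uses the Go client rather than kubectl.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// kubeContext is a placeholder: substitute the profile whose pod you want to watch.
	const kubeContext = "minikube"

	// podReady asks kubectl for the pod's Ready condition and reports whether it is
	// "True"; any error (including an unreachable apiserver) counts as not ready.
	func podReady(namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// Poll on roughly the cadence seen in the log until the pod is Ready or we give up.
		for i := 0; i < 30; i++ {
			ready, err := podReady("kube-system", "metrics-server-569cc877fc-v7d9t")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod has status \"Ready\":\"False\" (err=%v), retrying\n", err)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for the pod to become Ready")
	}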
	I0603 13:52:42.348936 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:42.363663 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:42.363735 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:42.400584 1143678 cri.go:89] found id: ""
	I0603 13:52:42.400616 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.400625 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:42.400631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:42.400685 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:42.438853 1143678 cri.go:89] found id: ""
	I0603 13:52:42.438885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.438893 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:42.438899 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:42.438954 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:42.474980 1143678 cri.go:89] found id: ""
	I0603 13:52:42.475013 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.475025 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:42.475032 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:42.475086 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:42.511027 1143678 cri.go:89] found id: ""
	I0603 13:52:42.511056 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.511068 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:42.511077 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:42.511237 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:42.545333 1143678 cri.go:89] found id: ""
	I0603 13:52:42.545367 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.545378 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:42.545386 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:42.545468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:42.583392 1143678 cri.go:89] found id: ""
	I0603 13:52:42.583438 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.583556 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:42.583591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:42.583656 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:42.620886 1143678 cri.go:89] found id: ""
	I0603 13:52:42.620916 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.620924 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:42.620930 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:42.620985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:42.656265 1143678 cri.go:89] found id: ""
	I0603 13:52:42.656301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.656313 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:42.656327 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:42.656344 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:42.711078 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:42.711124 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:42.727751 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:42.727788 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:42.802330 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:42.802356 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:42.802370 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:42.883700 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:42.883742 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.424591 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:45.440797 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:45.440883 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:45.483664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.483698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.483709 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:45.483717 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:45.483789 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:45.523147 1143678 cri.go:89] found id: ""
	I0603 13:52:45.523182 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.523193 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:45.523201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:45.523273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:45.563483 1143678 cri.go:89] found id: ""
	I0603 13:52:45.563516 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.563527 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:45.563536 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:45.563598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:45.603574 1143678 cri.go:89] found id: ""
	I0603 13:52:45.603603 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.603618 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:45.603625 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:45.603680 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:45.642664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.642694 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.642705 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:45.642714 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:45.642793 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:45.679961 1143678 cri.go:89] found id: ""
	I0603 13:52:45.679998 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.680011 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:45.680026 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:45.680100 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:45.716218 1143678 cri.go:89] found id: ""
	I0603 13:52:45.716255 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.716263 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:45.716270 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:45.716364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:45.752346 1143678 cri.go:89] found id: ""
	I0603 13:52:45.752374 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.752382 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:45.752391 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:45.752405 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.793992 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:45.794029 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:45.844930 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:45.844973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:45.859594 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:45.859633 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:45.936469 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:45.936498 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:45.936515 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
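	Every "describe nodes" attempt above fails the same way: kubectl is refused on localhost:8443, meaning nothing is listening on the apiserver port inside the VM, which is also why every crictl query finds zero containers. A direct way to confirm that symptom is to dial the port; the sketch below is a minimal probe to run on the node, not anything minikube itself does.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// localhost:8443 is the address the failed `kubectl describe nodes` calls
		// above were refused on; run this probe on the node itself.
		const addr = "127.0.0.1:8443"
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("apiserver port not reachable at %s: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("something is listening on %s\n", addr)
	}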
	I0603 13:52:45.317705 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.818994 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:45.870780 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.871003 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.871625 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:46.990866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.488680 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:48.514959 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:48.528331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:48.528401 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:48.565671 1143678 cri.go:89] found id: ""
	I0603 13:52:48.565703 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.565715 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:48.565724 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:48.565786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:48.603938 1143678 cri.go:89] found id: ""
	I0603 13:52:48.603973 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.603991 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:48.604000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:48.604068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:48.643521 1143678 cri.go:89] found id: ""
	I0603 13:52:48.643550 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.643562 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:48.643571 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:48.643627 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:48.678264 1143678 cri.go:89] found id: ""
	I0603 13:52:48.678301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.678312 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:48.678320 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:48.678407 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:48.714974 1143678 cri.go:89] found id: ""
	I0603 13:52:48.715014 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.715026 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:48.715034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:48.715138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:48.750364 1143678 cri.go:89] found id: ""
	I0603 13:52:48.750396 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.750408 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:48.750416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:48.750482 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:48.788203 1143678 cri.go:89] found id: ""
	I0603 13:52:48.788238 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.788249 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:48.788258 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:48.788345 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:48.826891 1143678 cri.go:89] found id: ""
	I0603 13:52:48.826920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.826928 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:48.826938 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:48.826951 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:48.877271 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:48.877315 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:48.892155 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:48.892187 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:48.973433 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:48.973459 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:48.973473 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:49.062819 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:49.062888 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:51.614261 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:51.628056 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:51.628142 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:51.662894 1143678 cri.go:89] found id: ""
	I0603 13:52:51.662924 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.662935 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:51.662942 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:51.663009 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:51.701847 1143678 cri.go:89] found id: ""
	I0603 13:52:51.701878 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.701889 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:51.701896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:51.701963 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:51.737702 1143678 cri.go:89] found id: ""
	I0603 13:52:51.737741 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.737752 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:51.737760 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:51.737833 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:51.772913 1143678 cri.go:89] found id: ""
	I0603 13:52:51.772944 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.772956 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:51.772964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:51.773034 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:51.810268 1143678 cri.go:89] found id: ""
	I0603 13:52:51.810298 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.810307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:51.810312 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:51.810377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:51.848575 1143678 cri.go:89] found id: ""
	I0603 13:52:51.848612 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.848624 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:51.848633 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:51.848696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:51.886500 1143678 cri.go:89] found id: ""
	I0603 13:52:51.886536 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.886549 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:51.886560 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:51.886617 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:51.924070 1143678 cri.go:89] found id: ""
	I0603 13:52:51.924104 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.924115 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:51.924128 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:51.924146 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:51.940324 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:51.940355 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:52.019958 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:52.019997 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:52.020015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:52.095953 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:52.095999 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:52.141070 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:52.141102 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
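	With the apiserver down, the only useful evidence is what the "Gathering logs for ..." steps collect from the node itself: the kubelet and CRI-O journals plus recent kernel warnings. The sketch below runs the same three shell commands locally; the command strings are copied verbatim from the log, while the Go wrapper around them is illustrative only.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Each cmd is the payload minikube runs over SSH for one "Gathering logs" step.
		steps := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		}
		for _, s := range steps {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("==> %s logs (err=%v)\n%s\n", s.name, err, out)
		}
	}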
	I0603 13:52:50.317008 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:52.317142 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.872275 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.376761 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.490098 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:53.491292 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.694651 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:54.708508 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:54.708597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:54.745708 1143678 cri.go:89] found id: ""
	I0603 13:52:54.745748 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.745762 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:54.745770 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:54.745842 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:54.783335 1143678 cri.go:89] found id: ""
	I0603 13:52:54.783369 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.783381 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:54.783389 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:54.783465 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:54.824111 1143678 cri.go:89] found id: ""
	I0603 13:52:54.824140 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.824151 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:54.824159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:54.824230 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:54.868676 1143678 cri.go:89] found id: ""
	I0603 13:52:54.868710 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.868721 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:54.868730 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:54.868801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:54.906180 1143678 cri.go:89] found id: ""
	I0603 13:52:54.906216 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.906227 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:54.906235 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:54.906310 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:54.945499 1143678 cri.go:89] found id: ""
	I0603 13:52:54.945532 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.945544 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:54.945552 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:54.945619 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:54.986785 1143678 cri.go:89] found id: ""
	I0603 13:52:54.986812 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.986820 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:54.986826 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:54.986888 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:55.035290 1143678 cri.go:89] found id: ""
	I0603 13:52:55.035320 1143678 logs.go:276] 0 containers: []
	W0603 13:52:55.035329 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:55.035338 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:55.035352 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:55.085384 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:55.085451 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:55.100699 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:55.100733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:55.171587 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:55.171614 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:55.171638 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:55.249078 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:55.249123 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:54.317435 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.318657 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.869954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.872728 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:55.990512 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.489578 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:00.490668 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:57.791538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:57.804373 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:57.804437 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:57.843969 1143678 cri.go:89] found id: ""
	I0603 13:52:57.844007 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.844016 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:57.844022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:57.844077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:57.881201 1143678 cri.go:89] found id: ""
	I0603 13:52:57.881239 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.881252 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:57.881261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:57.881336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:57.917572 1143678 cri.go:89] found id: ""
	I0603 13:52:57.917601 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.917610 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:57.917617 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:57.917671 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:57.951603 1143678 cri.go:89] found id: ""
	I0603 13:52:57.951642 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.951654 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:57.951661 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:57.951716 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:57.992833 1143678 cri.go:89] found id: ""
	I0603 13:52:57.992863 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.992874 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:57.992881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:57.992945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:58.031595 1143678 cri.go:89] found id: ""
	I0603 13:52:58.031636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.031648 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:58.031657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:58.031723 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:58.068947 1143678 cri.go:89] found id: ""
	I0603 13:52:58.068985 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.068996 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:58.069005 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:58.069077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:58.106559 1143678 cri.go:89] found id: ""
	I0603 13:52:58.106587 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.106598 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:58.106623 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:58.106640 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:58.162576 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:58.162623 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:58.177104 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:58.177155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:58.250279 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:58.250312 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:58.250329 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.330876 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:58.330920 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:00.871443 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:00.885505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:00.885589 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:00.923878 1143678 cri.go:89] found id: ""
	I0603 13:53:00.923910 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.923920 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:00.923928 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:00.923995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:00.960319 1143678 cri.go:89] found id: ""
	I0603 13:53:00.960362 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.960375 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:00.960384 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:00.960449 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:00.998806 1143678 cri.go:89] found id: ""
	I0603 13:53:00.998845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.998857 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:00.998866 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:00.998929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:01.033211 1143678 cri.go:89] found id: ""
	I0603 13:53:01.033245 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.033256 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:01.033265 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:01.033341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:01.072852 1143678 cri.go:89] found id: ""
	I0603 13:53:01.072883 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.072891 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:01.072898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:01.072950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:01.115667 1143678 cri.go:89] found id: ""
	I0603 13:53:01.115699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.115711 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:01.115719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:01.115824 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:01.153676 1143678 cri.go:89] found id: ""
	I0603 13:53:01.153717 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.153733 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:01.153741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:01.153815 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:01.188970 1143678 cri.go:89] found id: ""
	I0603 13:53:01.189003 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.189017 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:01.189031 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:01.189049 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:01.233151 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:01.233214 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:01.287218 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:01.287269 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:01.302370 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:01.302408 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:01.378414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:01.378444 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:01.378463 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.817003 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.317698 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.371257 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.872917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:02.989133 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:04.990930 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.957327 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:03.971246 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:03.971340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:04.007299 1143678 cri.go:89] found id: ""
	I0603 13:53:04.007335 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.007347 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:04.007356 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:04.007427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:04.046364 1143678 cri.go:89] found id: ""
	I0603 13:53:04.046396 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.046405 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:04.046411 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:04.046469 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:04.082094 1143678 cri.go:89] found id: ""
	I0603 13:53:04.082127 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.082139 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:04.082148 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:04.082209 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:04.117389 1143678 cri.go:89] found id: ""
	I0603 13:53:04.117434 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.117446 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:04.117454 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:04.117530 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:04.150560 1143678 cri.go:89] found id: ""
	I0603 13:53:04.150596 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.150606 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:04.150614 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:04.150678 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:04.184808 1143678 cri.go:89] found id: ""
	I0603 13:53:04.184845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.184857 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:04.184865 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:04.184935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:04.220286 1143678 cri.go:89] found id: ""
	I0603 13:53:04.220317 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.220326 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:04.220332 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:04.220385 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:04.258898 1143678 cri.go:89] found id: ""
	I0603 13:53:04.258929 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.258941 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:04.258955 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:04.258972 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:04.312151 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:04.312198 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:04.329908 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:04.329943 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:04.402075 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:04.402106 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:04.402138 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:04.482873 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:04.482936 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
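	The "container status" step above relies on a small shell fallback chain: the backquoted which picks an absolute crictl path when one exists (falling back to the bare name on PATH), and only if that crictl listing itself fails does it try docker ps. The sketch below just runs that one-liner verbatim; again it assumes you are on the node and is not minikube's code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Copied from the "container status" step in the log above.
		const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("both crictl and docker listings failed: %v\n", err)
		}
		fmt.Print(string(out))
	}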
	I0603 13:53:07.049978 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:07.063072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:07.063140 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:07.097703 1143678 cri.go:89] found id: ""
	I0603 13:53:07.097737 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.097748 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:07.097755 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:07.097811 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:07.134826 1143678 cri.go:89] found id: ""
	I0603 13:53:07.134865 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.134878 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:07.134886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:07.134955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:07.178015 1143678 cri.go:89] found id: ""
	I0603 13:53:07.178050 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.178061 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:07.178068 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:07.178138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:07.215713 1143678 cri.go:89] found id: ""
	I0603 13:53:07.215753 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.215764 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:07.215777 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:07.215840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:07.251787 1143678 cri.go:89] found id: ""
	I0603 13:53:07.251815 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.251824 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:07.251830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:07.251897 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:07.293357 1143678 cri.go:89] found id: ""
	I0603 13:53:07.293387 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.293398 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:07.293427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:07.293496 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:07.329518 1143678 cri.go:89] found id: ""
	I0603 13:53:07.329551 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.329561 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:07.329569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:07.329650 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:03.819203 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.317653 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.370539 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:08.370701 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.490706 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:09.990002 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.369534 1143678 cri.go:89] found id: ""
	I0603 13:53:07.369576 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.369587 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:07.369601 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:07.369617 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:07.424211 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:07.424260 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:07.439135 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:07.439172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:07.511325 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:07.511360 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:07.511378 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:07.588348 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:07.588393 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:10.129812 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:10.143977 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:10.144057 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:10.181873 1143678 cri.go:89] found id: ""
	I0603 13:53:10.181906 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.181918 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:10.181926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:10.181981 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:10.218416 1143678 cri.go:89] found id: ""
	I0603 13:53:10.218460 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.218473 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:10.218482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:10.218562 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:10.253580 1143678 cri.go:89] found id: ""
	I0603 13:53:10.253618 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.253630 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:10.253646 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:10.253717 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:10.302919 1143678 cri.go:89] found id: ""
	I0603 13:53:10.302949 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.302957 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:10.302964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:10.303024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:10.343680 1143678 cri.go:89] found id: ""
	I0603 13:53:10.343709 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.343721 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:10.343729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:10.343798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:10.379281 1143678 cri.go:89] found id: ""
	I0603 13:53:10.379307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.379315 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:10.379322 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:10.379374 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:10.420197 1143678 cri.go:89] found id: ""
	I0603 13:53:10.420225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.420233 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:10.420239 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:10.420322 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:10.458578 1143678 cri.go:89] found id: ""
	I0603 13:53:10.458609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.458618 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:10.458629 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:10.458642 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:10.511785 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:10.511828 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:10.526040 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:10.526081 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:10.603721 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:10.603749 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:10.603766 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:10.684153 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:10.684204 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:08.816447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.318264 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:10.374788 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:12.871019 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.871064 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.992127 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.488866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:13.227605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:13.241131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:13.241228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:13.284636 1143678 cri.go:89] found id: ""
	I0603 13:53:13.284667 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.284675 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:13.284681 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:13.284737 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:13.322828 1143678 cri.go:89] found id: ""
	I0603 13:53:13.322861 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.322873 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:13.322881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:13.322945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:13.360061 1143678 cri.go:89] found id: ""
	I0603 13:53:13.360089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.360097 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:13.360103 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:13.360176 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:13.397115 1143678 cri.go:89] found id: ""
	I0603 13:53:13.397149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.397158 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:13.397164 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:13.397234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:13.434086 1143678 cri.go:89] found id: ""
	I0603 13:53:13.434118 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.434127 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:13.434135 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:13.434194 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:13.470060 1143678 cri.go:89] found id: ""
	I0603 13:53:13.470089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.470101 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:13.470113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:13.470189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:13.508423 1143678 cri.go:89] found id: ""
	I0603 13:53:13.508464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.508480 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:13.508487 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:13.508552 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:13.546713 1143678 cri.go:89] found id: ""
	I0603 13:53:13.546752 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.546765 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:13.546778 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:13.546796 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:13.632984 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:13.633027 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:13.679169 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:13.679216 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:13.735765 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:13.735812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.750175 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:13.750210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:13.826571 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
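	(The cycle above is the health probe this test repeats while waiting for the old-k8s-version control plane: pgrep for a kube-apiserver process, ask CRI-O via crictl for each expected component, find nothing, then fall back to gathering kubelet, dmesg, CRI-O and container-status logs. The "describe nodes" step fails with connection refused on localhost:8443 for the same reason the crictl checks come back empty: no kube-apiserver container is running yet. A minimal, hypothetical Go sketch of that component check follows; it is not the minikube source, it simply shells out to the same `sudo crictl ps -a --quiet --name=<component>` command visible in the log, and an empty result corresponds to the "No container was found matching" warnings.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component list taken from the probe order in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same command the log shows ssh_runner executing on the node.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}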
	I0603 13:53:16.327185 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:16.340163 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:16.340253 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:16.380260 1143678 cri.go:89] found id: ""
	I0603 13:53:16.380292 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.380300 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:16.380307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:16.380373 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:16.420408 1143678 cri.go:89] found id: ""
	I0603 13:53:16.420438 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.420449 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:16.420457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:16.420534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:16.459250 1143678 cri.go:89] found id: ""
	I0603 13:53:16.459285 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.459297 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:16.459307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:16.459377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:16.496395 1143678 cri.go:89] found id: ""
	I0603 13:53:16.496427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.496436 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:16.496444 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:16.496516 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:16.534402 1143678 cri.go:89] found id: ""
	I0603 13:53:16.534433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.534442 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:16.534449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:16.534514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:16.571550 1143678 cri.go:89] found id: ""
	I0603 13:53:16.571577 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.571584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:16.571591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:16.571659 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:16.608425 1143678 cri.go:89] found id: ""
	I0603 13:53:16.608457 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.608468 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:16.608482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:16.608549 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:16.647282 1143678 cri.go:89] found id: ""
	I0603 13:53:16.647315 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.647324 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:16.647334 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:16.647351 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:16.728778 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.728814 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:16.728831 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:16.822702 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:16.822747 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:16.868816 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:16.868845 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:16.922262 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:16.922301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.818935 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.316865 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:17.370681 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.371232 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.489494 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:18.490176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:20.491433 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
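	(The interleaved pod_ready.go lines belong to the other clusters in this run, each polling until its metrics-server pod reports Ready=True. A hypothetical Go sketch of such a readiness poll is shown below; it is not the pod_ready.go implementation, and it assumes only a working kubeconfig in the environment plus the pod name and namespace printed in the log, reading the Ready condition through kubectl's jsonpath output.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reports whether the pod's Ready condition is "True".
	func podReady(namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// Pod name and namespace as they appear in the log lines above.
		for {
			ready, err := podReady("kube-system", "metrics-server-569cc877fc-v7d9t")
			if err != nil {
				fmt.Println("check failed:", err)
			} else if ready {
				fmt.Println("pod is Ready")
				return
			} else {
				fmt.Println(`status "Ready":"False"`)
			}
			time.Sleep(2 * time.Second)
		}
	}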
	I0603 13:53:19.438231 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:19.452520 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:19.452603 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:19.488089 1143678 cri.go:89] found id: ""
	I0603 13:53:19.488121 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.488133 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:19.488141 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:19.488216 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:19.524494 1143678 cri.go:89] found id: ""
	I0603 13:53:19.524527 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.524537 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:19.524543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:19.524595 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:19.561288 1143678 cri.go:89] found id: ""
	I0603 13:53:19.561323 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.561333 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:19.561341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:19.561420 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:19.597919 1143678 cri.go:89] found id: ""
	I0603 13:53:19.597965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.597976 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:19.597984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:19.598056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:19.634544 1143678 cri.go:89] found id: ""
	I0603 13:53:19.634579 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.634591 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:19.634599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:19.634668 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:19.671473 1143678 cri.go:89] found id: ""
	I0603 13:53:19.671506 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.671518 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:19.671527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:19.671598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:19.707968 1143678 cri.go:89] found id: ""
	I0603 13:53:19.708000 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.708011 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:19.708019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:19.708119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:19.745555 1143678 cri.go:89] found id: ""
	I0603 13:53:19.745593 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.745604 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:19.745617 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:19.745631 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:19.830765 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:19.830812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:19.875160 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:19.875197 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:19.927582 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:19.927627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:19.942258 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:19.942289 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:20.016081 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:18.820067 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.319103 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.871214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.371680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.990210 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.990605 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.516859 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:22.534973 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:22.535040 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:22.593003 1143678 cri.go:89] found id: ""
	I0603 13:53:22.593043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.593051 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:22.593058 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:22.593121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:22.649916 1143678 cri.go:89] found id: ""
	I0603 13:53:22.649951 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.649963 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:22.649971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:22.650030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:22.689397 1143678 cri.go:89] found id: ""
	I0603 13:53:22.689449 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.689459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:22.689465 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:22.689521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:22.725109 1143678 cri.go:89] found id: ""
	I0603 13:53:22.725149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.725161 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:22.725169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:22.725250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:22.761196 1143678 cri.go:89] found id: ""
	I0603 13:53:22.761225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.761237 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:22.761245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:22.761311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:22.804065 1143678 cri.go:89] found id: ""
	I0603 13:53:22.804103 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.804112 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:22.804119 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:22.804189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:22.840456 1143678 cri.go:89] found id: ""
	I0603 13:53:22.840485 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.840493 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:22.840499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:22.840553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:22.876796 1143678 cri.go:89] found id: ""
	I0603 13:53:22.876831 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.876842 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:22.876854 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:22.876869 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:22.957274 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:22.957317 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:22.998360 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:22.998394 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.054895 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:23.054942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:23.070107 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:23.070141 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:23.147460 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:25.647727 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:25.663603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:25.663691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:25.698102 1143678 cri.go:89] found id: ""
	I0603 13:53:25.698139 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.698150 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:25.698159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:25.698227 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:25.738601 1143678 cri.go:89] found id: ""
	I0603 13:53:25.738641 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.738648 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:25.738655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:25.738718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:25.780622 1143678 cri.go:89] found id: ""
	I0603 13:53:25.780657 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.780670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:25.780678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:25.780751 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:25.816950 1143678 cri.go:89] found id: ""
	I0603 13:53:25.816978 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.816989 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:25.816997 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:25.817060 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:25.860011 1143678 cri.go:89] found id: ""
	I0603 13:53:25.860051 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.860063 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:25.860072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:25.860138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:25.898832 1143678 cri.go:89] found id: ""
	I0603 13:53:25.898866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.898878 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:25.898886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:25.898959 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:25.937483 1143678 cri.go:89] found id: ""
	I0603 13:53:25.937518 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.937533 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:25.937541 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:25.937607 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:25.973972 1143678 cri.go:89] found id: ""
	I0603 13:53:25.974008 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.974021 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:25.974034 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:25.974065 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:25.989188 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:25.989227 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:26.065521 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:26.065546 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:26.065560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:26.147852 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:26.147899 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:26.191395 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:26.191431 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.816928 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:25.818534 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:26.872084 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.872558 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:27.489951 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:29.989352 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.751041 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:28.764764 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:28.764826 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:28.808232 1143678 cri.go:89] found id: ""
	I0603 13:53:28.808271 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.808285 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:28.808293 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:28.808369 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:28.849058 1143678 cri.go:89] found id: ""
	I0603 13:53:28.849094 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.849107 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:28.849114 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:28.849187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:28.892397 1143678 cri.go:89] found id: ""
	I0603 13:53:28.892427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.892441 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:28.892447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:28.892515 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:28.932675 1143678 cri.go:89] found id: ""
	I0603 13:53:28.932715 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.932727 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:28.932735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:28.932840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:28.969732 1143678 cri.go:89] found id: ""
	I0603 13:53:28.969769 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.969781 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:28.969789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:28.969857 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:29.007765 1143678 cri.go:89] found id: ""
	I0603 13:53:29.007791 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.007798 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:29.007804 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:29.007865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:29.044616 1143678 cri.go:89] found id: ""
	I0603 13:53:29.044652 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.044664 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:29.044675 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:29.044734 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:29.081133 1143678 cri.go:89] found id: ""
	I0603 13:53:29.081166 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.081187 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:29.081198 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:29.081213 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:29.095753 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:29.095783 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:29.174472 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:29.174496 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:29.174516 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:29.251216 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:29.251262 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:29.289127 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:29.289168 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:31.845335 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:31.860631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:31.860720 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:31.904507 1143678 cri.go:89] found id: ""
	I0603 13:53:31.904544 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.904556 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:31.904564 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:31.904633 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:31.940795 1143678 cri.go:89] found id: ""
	I0603 13:53:31.940832 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.940845 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:31.940852 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:31.940921 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:31.978447 1143678 cri.go:89] found id: ""
	I0603 13:53:31.978481 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.978499 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:31.978507 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:31.978569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:32.017975 1143678 cri.go:89] found id: ""
	I0603 13:53:32.018009 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.018018 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:32.018025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:32.018089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:32.053062 1143678 cri.go:89] found id: ""
	I0603 13:53:32.053091 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.053099 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:32.053106 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:32.053181 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:32.089822 1143678 cri.go:89] found id: ""
	I0603 13:53:32.089856 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.089868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:32.089877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:32.089944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:32.126243 1143678 cri.go:89] found id: ""
	I0603 13:53:32.126280 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.126291 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:32.126299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:32.126358 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:32.163297 1143678 cri.go:89] found id: ""
	I0603 13:53:32.163346 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.163357 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:32.163370 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:32.163386 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:32.218452 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:32.218495 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:32.233688 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:32.233731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:32.318927 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:32.318947 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:32.318963 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:28.317046 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:30.317308 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.318273 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.370654 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:33.371038 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.991594 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:34.492142 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.403734 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:32.403786 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:34.947857 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:34.961894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:34.961983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:35.006279 1143678 cri.go:89] found id: ""
	I0603 13:53:35.006308 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.006318 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:35.006326 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:35.006398 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:35.042765 1143678 cri.go:89] found id: ""
	I0603 13:53:35.042794 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.042807 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:35.042815 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:35.042877 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:35.084332 1143678 cri.go:89] found id: ""
	I0603 13:53:35.084365 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.084375 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:35.084381 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:35.084448 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:35.121306 1143678 cri.go:89] found id: ""
	I0603 13:53:35.121337 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.121348 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:35.121358 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:35.121444 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:35.155952 1143678 cri.go:89] found id: ""
	I0603 13:53:35.155994 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.156008 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:35.156016 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:35.156089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:35.196846 1143678 cri.go:89] found id: ""
	I0603 13:53:35.196881 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.196893 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:35.196902 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:35.196972 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:35.232396 1143678 cri.go:89] found id: ""
	I0603 13:53:35.232429 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.232440 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:35.232449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:35.232528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:35.269833 1143678 cri.go:89] found id: ""
	I0603 13:53:35.269862 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.269872 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:35.269885 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:35.269902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:35.357754 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:35.357794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:35.399793 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:35.399822 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:35.453742 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:35.453782 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:35.468431 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:35.468465 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:35.547817 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
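	(Editor's note) The cycle above is minikube probing each control-plane component with `sudo crictl ps -a --quiet --name=<component>` and getting back no IDs ("0 containers: []"). Below is a minimal standalone sketch of that probe, not minikube's cri.go; it assumes crictl is installed and sudo is available on the node.

```go
// crictl_probe.go - hedged sketch of the per-component container probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same command the Run: lines show; an empty result
// corresponds to the "0 containers: []" / "No container was found" lines above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}
```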
	I0603 13:53:34.816178 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.817093 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:35.373072 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:37.870173 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.989364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.990163 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
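	(Editor's note) The interleaved pod_ready lines come from other clusters in this run, each polling a metrics-server pod whose Ready condition stays False. A hedged client-go sketch of that readiness check follows; the pod name is copied from the log, while the kubeconfig path, retry count, and cadence are assumptions, not the test's actual values.

```go
// pod_ready_sketch.go - hedged sketch of polling a pod's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "kube-system", "metrics-server-569cc877fc-v7d9t" // name taken from the log above
	for i := 0; i < 5; i++ {                                          // bounded retries for the sketch
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		switch {
		case err != nil:
			fmt.Println("get pod:", err)
		case podReady(pod):
			fmt.Println("pod is Ready")
			return
		default:
			fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
		}
		time.Sleep(2 * time.Second)
	}
}
```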
	I0603 13:53:38.048517 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:38.063481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:38.063569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:38.100487 1143678 cri.go:89] found id: ""
	I0603 13:53:38.100523 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.100535 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:38.100543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:38.100612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:38.137627 1143678 cri.go:89] found id: ""
	I0603 13:53:38.137665 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.137678 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:38.137686 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:38.137754 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:38.176138 1143678 cri.go:89] found id: ""
	I0603 13:53:38.176172 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.176190 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:38.176199 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:38.176265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:38.214397 1143678 cri.go:89] found id: ""
	I0603 13:53:38.214439 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.214451 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:38.214459 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:38.214528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:38.250531 1143678 cri.go:89] found id: ""
	I0603 13:53:38.250563 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.250573 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:38.250580 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:38.250642 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:38.286558 1143678 cri.go:89] found id: ""
	I0603 13:53:38.286587 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.286595 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:38.286601 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:38.286652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:38.327995 1143678 cri.go:89] found id: ""
	I0603 13:53:38.328043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.328055 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:38.328062 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:38.328126 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:38.374266 1143678 cri.go:89] found id: ""
	I0603 13:53:38.374300 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.374311 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:38.374324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:38.374341 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:38.426876 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:38.426918 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:38.443296 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:38.443340 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:38.514702 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:38.514728 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:38.514746 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:38.601536 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:38.601590 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:41.141766 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:41.155927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:41.156006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:41.196829 1143678 cri.go:89] found id: ""
	I0603 13:53:41.196871 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.196884 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:41.196896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:41.196967 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:41.231729 1143678 cri.go:89] found id: ""
	I0603 13:53:41.231780 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.231802 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:41.231812 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:41.231900 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:41.266663 1143678 cri.go:89] found id: ""
	I0603 13:53:41.266699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.266711 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:41.266720 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:41.266783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:41.305251 1143678 cri.go:89] found id: ""
	I0603 13:53:41.305278 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.305286 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:41.305292 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:41.305351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:41.342527 1143678 cri.go:89] found id: ""
	I0603 13:53:41.342556 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.342568 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:41.342575 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:41.342637 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:41.379950 1143678 cri.go:89] found id: ""
	I0603 13:53:41.379982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.379992 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:41.379999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:41.380068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:41.414930 1143678 cri.go:89] found id: ""
	I0603 13:53:41.414965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.414973 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:41.414980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:41.415043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:41.449265 1143678 cri.go:89] found id: ""
	I0603 13:53:41.449299 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.449310 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:41.449324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:41.449343 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:41.502525 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:41.502560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:41.519357 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:41.519390 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:41.591443 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:41.591471 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:41.591485 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:41.668758 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:41.668802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
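	(Editor's note) Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is serving on the apiserver port. A quick way to reproduce that check outside kubectl, assuming the same host and port, is a plain TCP dial:

```go
// apiserver_port_probe.go - reproduces the "connection refused" symptom with a raw dial.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 3*time.Second)
	if err != nil {
		// On the node in this log this would print a "connection refused" error.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}
```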
	I0603 13:53:39.317333 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.317598 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:40.370844 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:42.871161 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.489574 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:43.989620 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:44.211768 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:44.226789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:44.226869 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:44.265525 1143678 cri.go:89] found id: ""
	I0603 13:53:44.265553 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.265561 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:44.265568 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:44.265646 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:44.304835 1143678 cri.go:89] found id: ""
	I0603 13:53:44.304866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.304874 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:44.304880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:44.304935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:44.345832 1143678 cri.go:89] found id: ""
	I0603 13:53:44.345875 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.345885 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:44.345891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:44.345950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:44.386150 1143678 cri.go:89] found id: ""
	I0603 13:53:44.386186 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.386198 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:44.386207 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:44.386268 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:44.423662 1143678 cri.go:89] found id: ""
	I0603 13:53:44.423697 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.423709 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:44.423719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:44.423788 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:44.462437 1143678 cri.go:89] found id: ""
	I0603 13:53:44.462464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.462473 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:44.462481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:44.462567 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:44.501007 1143678 cri.go:89] found id: ""
	I0603 13:53:44.501062 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.501074 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:44.501081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:44.501138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:44.535501 1143678 cri.go:89] found id: ""
	I0603 13:53:44.535543 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.535554 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:44.535567 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:44.535585 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:44.587114 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:44.587157 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:44.602151 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:44.602180 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:44.674065 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:44.674104 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:44.674122 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:44.757443 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:44.757488 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
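	(Editor's note) Each retry cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` to look for a running apiserver process before listing containers. A small sketch of that probe; the command is copied from the log, while the error handling is an assumption (pgrep exits non-zero when nothing matches).

```go
// apiserver_process_probe.go - hedged sketch of the pgrep check that opens each cycle.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep returns exit status 1 if no process matches the pattern.
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("newest matching kube-apiserver PID:", strings.TrimSpace(string(out)))
}
```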
	I0603 13:53:47.306481 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:47.319895 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:47.319958 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:43.818030 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.316852 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:45.370762 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.371799 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:49.871512 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.488076 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:48.488472 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.488892 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.356975 1143678 cri.go:89] found id: ""
	I0603 13:53:47.357013 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.357026 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:47.357034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:47.357106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:47.393840 1143678 cri.go:89] found id: ""
	I0603 13:53:47.393869 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.393877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:47.393884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:47.393936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:47.428455 1143678 cri.go:89] found id: ""
	I0603 13:53:47.428493 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.428506 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:47.428514 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:47.428597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:47.463744 1143678 cri.go:89] found id: ""
	I0603 13:53:47.463777 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.463788 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:47.463795 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:47.463855 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:47.498134 1143678 cri.go:89] found id: ""
	I0603 13:53:47.498159 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.498167 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:47.498173 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:47.498245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:47.534153 1143678 cri.go:89] found id: ""
	I0603 13:53:47.534195 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.534206 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:47.534219 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:47.534272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:47.567148 1143678 cri.go:89] found id: ""
	I0603 13:53:47.567179 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.567187 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:47.567194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:47.567249 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:47.605759 1143678 cri.go:89] found id: ""
	I0603 13:53:47.605790 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.605798 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:47.605810 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:47.605824 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:47.683651 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:47.683692 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:47.683705 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:47.763810 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:47.763848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.806092 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:47.806131 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:47.859637 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:47.859677 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.377538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:50.391696 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:50.391776 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:50.433968 1143678 cri.go:89] found id: ""
	I0603 13:53:50.434001 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.434013 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:50.434020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:50.434080 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:50.470561 1143678 cri.go:89] found id: ""
	I0603 13:53:50.470589 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.470596 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:50.470603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:50.470662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:50.510699 1143678 cri.go:89] found id: ""
	I0603 13:53:50.510727 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.510735 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:50.510741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:50.510808 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:50.553386 1143678 cri.go:89] found id: ""
	I0603 13:53:50.553433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.553445 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:50.553452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:50.553533 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:50.589731 1143678 cri.go:89] found id: ""
	I0603 13:53:50.589779 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.589792 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:50.589801 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:50.589885 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:50.625144 1143678 cri.go:89] found id: ""
	I0603 13:53:50.625180 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.625192 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:50.625201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:50.625274 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:50.669021 1143678 cri.go:89] found id: ""
	I0603 13:53:50.669053 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.669061 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:50.669067 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:50.669121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:50.714241 1143678 cri.go:89] found id: ""
	I0603 13:53:50.714270 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.714284 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:50.714297 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:50.714314 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:50.766290 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:50.766333 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.797242 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:50.797275 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:50.866589 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:50.866616 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:50.866637 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:50.948808 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:50.948854 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:48.318282 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.817445 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.370798 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.377027 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.490719 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.989907 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:53.496797 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:53.511944 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:53.512021 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:53.549028 1143678 cri.go:89] found id: ""
	I0603 13:53:53.549057 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.549066 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:53.549072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:53.549128 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:53.583533 1143678 cri.go:89] found id: ""
	I0603 13:53:53.583566 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.583578 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:53.583586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:53.583652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:53.618578 1143678 cri.go:89] found id: ""
	I0603 13:53:53.618609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.618618 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:53.618626 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:53.618701 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:53.653313 1143678 cri.go:89] found id: ""
	I0603 13:53:53.653347 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.653358 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:53.653364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:53.653442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:53.689805 1143678 cri.go:89] found id: ""
	I0603 13:53:53.689839 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.689849 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:53.689857 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:53.689931 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:53.725538 1143678 cri.go:89] found id: ""
	I0603 13:53:53.725571 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.725584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:53.725592 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:53.725648 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:53.762284 1143678 cri.go:89] found id: ""
	I0603 13:53:53.762325 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.762336 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:53.762345 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:53.762419 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:53.799056 1143678 cri.go:89] found id: ""
	I0603 13:53:53.799083 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.799092 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:53.799102 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:53.799115 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:53.873743 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:53.873809 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:53.919692 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:53.919724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:53.969068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:53.969109 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.983840 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:53.983866 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:54.054842 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.555587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:56.570014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:56.570076 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:56.604352 1143678 cri.go:89] found id: ""
	I0603 13:53:56.604386 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.604400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:56.604408 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:56.604479 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:56.648126 1143678 cri.go:89] found id: ""
	I0603 13:53:56.648161 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.648171 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:56.648177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:56.648231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:56.685621 1143678 cri.go:89] found id: ""
	I0603 13:53:56.685658 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.685670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:56.685678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:56.685763 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:56.721860 1143678 cri.go:89] found id: ""
	I0603 13:53:56.721891 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.721913 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:56.721921 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:56.721989 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:56.757950 1143678 cri.go:89] found id: ""
	I0603 13:53:56.757982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.757995 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:56.758002 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:56.758068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:56.794963 1143678 cri.go:89] found id: ""
	I0603 13:53:56.794991 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.794999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:56.795007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:56.795072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:56.831795 1143678 cri.go:89] found id: ""
	I0603 13:53:56.831827 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.831839 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:56.831846 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:56.831913 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:56.869263 1143678 cri.go:89] found id: ""
	I0603 13:53:56.869293 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.869303 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:56.869314 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:56.869331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:56.945068 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.945096 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:56.945110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:57.028545 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:57.028582 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:57.069973 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:57.070009 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:57.126395 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:57.126436 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
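	(Editor's note) The "Gathering logs for ..." steps shell out to journalctl, dmesg and crictl/docker exactly as the Run: lines show. A rough local sketch that replays those commands through bash; sudo access and the presence of these tools are assumptions, and this is not minikube's logs.go.

```go
// gather_logs_sketch.go - replays the diagnostic shell commands from the log.
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through bash and prints whatever comes back.
func gather(label, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("== %s (err=%v) ==\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```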
	I0603 13:53:53.316616 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:55.316981 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:57.317295 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.870680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.371553 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.990964 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.489616 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.644870 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:59.658547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:59.658634 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:59.694625 1143678 cri.go:89] found id: ""
	I0603 13:53:59.694656 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.694665 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:59.694673 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:59.694740 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:59.730475 1143678 cri.go:89] found id: ""
	I0603 13:53:59.730573 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.730590 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:59.730599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:59.730696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:59.768533 1143678 cri.go:89] found id: ""
	I0603 13:53:59.768567 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.768580 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:59.768590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:59.768662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:59.804913 1143678 cri.go:89] found id: ""
	I0603 13:53:59.804944 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.804953 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:59.804960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:59.805014 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:59.850331 1143678 cri.go:89] found id: ""
	I0603 13:53:59.850363 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.850376 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:59.850385 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:59.850466 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:59.890777 1143678 cri.go:89] found id: ""
	I0603 13:53:59.890814 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.890826 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:59.890834 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:59.890909 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:59.931233 1143678 cri.go:89] found id: ""
	I0603 13:53:59.931268 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.931277 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:59.931283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:59.931354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:59.966267 1143678 cri.go:89] found id: ""
	I0603 13:53:59.966307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.966319 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:59.966333 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:59.966356 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:00.019884 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:00.019924 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:00.034936 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:00.034982 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:00.115002 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:00.115035 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:00.115053 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:00.189992 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:00.190035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:59.818065 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.316183 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.870679 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.872563 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.490213 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.988699 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.737387 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:02.752131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:02.752220 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:02.787863 1143678 cri.go:89] found id: ""
	I0603 13:54:02.787893 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.787902 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:02.787908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:02.787974 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:02.824938 1143678 cri.go:89] found id: ""
	I0603 13:54:02.824973 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.824983 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:02.824989 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:02.825061 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:02.861425 1143678 cri.go:89] found id: ""
	I0603 13:54:02.861461 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.861469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:02.861476 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:02.861546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:02.907417 1143678 cri.go:89] found id: ""
	I0603 13:54:02.907453 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.907475 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:02.907483 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:02.907553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:02.953606 1143678 cri.go:89] found id: ""
	I0603 13:54:02.953640 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.953649 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:02.953655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:02.953728 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:03.007785 1143678 cri.go:89] found id: ""
	I0603 13:54:03.007816 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.007824 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:03.007830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:03.007896 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:03.058278 1143678 cri.go:89] found id: ""
	I0603 13:54:03.058316 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.058329 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:03.058338 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:03.058404 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:03.094766 1143678 cri.go:89] found id: ""
	I0603 13:54:03.094800 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.094811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:03.094824 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:03.094840 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:03.163663 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:03.163690 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:03.163704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:03.250751 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:03.250802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:03.292418 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:03.292466 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:03.344552 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:03.344600 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:05.859965 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:05.875255 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:05.875340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:05.918590 1143678 cri.go:89] found id: ""
	I0603 13:54:05.918619 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.918630 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:05.918637 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:05.918706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:05.953932 1143678 cri.go:89] found id: ""
	I0603 13:54:05.953969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.953980 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:05.953988 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:05.954056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:05.993319 1143678 cri.go:89] found id: ""
	I0603 13:54:05.993348 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.993359 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:05.993368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:05.993468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:06.033047 1143678 cri.go:89] found id: ""
	I0603 13:54:06.033079 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.033087 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:06.033100 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:06.033156 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:06.072607 1143678 cri.go:89] found id: ""
	I0603 13:54:06.072631 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.072640 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:06.072647 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:06.072698 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:06.109944 1143678 cri.go:89] found id: ""
	I0603 13:54:06.109990 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.109999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:06.110007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:06.110071 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:06.150235 1143678 cri.go:89] found id: ""
	I0603 13:54:06.150266 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.150276 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:06.150284 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:06.150349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:06.193963 1143678 cri.go:89] found id: ""
	I0603 13:54:06.193992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.194004 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:06.194017 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:06.194035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:06.235790 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:06.235827 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:06.289940 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:06.289980 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:06.305205 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:06.305240 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:06.381170 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:06.381191 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:06.381206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:04.316812 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.317759 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.370944 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.371668 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:05.989346 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.492021 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.958985 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:08.973364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:08.973462 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:09.015050 1143678 cri.go:89] found id: ""
	I0603 13:54:09.015087 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.015099 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:09.015107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:09.015187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:09.054474 1143678 cri.go:89] found id: ""
	I0603 13:54:09.054508 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.054521 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:09.054533 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:09.054590 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:09.090867 1143678 cri.go:89] found id: ""
	I0603 13:54:09.090905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.090917 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:09.090926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:09.090995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:09.128401 1143678 cri.go:89] found id: ""
	I0603 13:54:09.128433 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.128441 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:09.128447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:09.128511 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:09.162952 1143678 cri.go:89] found id: ""
	I0603 13:54:09.162992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.163005 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:09.163013 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:09.163078 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:09.200375 1143678 cri.go:89] found id: ""
	I0603 13:54:09.200402 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.200410 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:09.200416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:09.200495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:09.244694 1143678 cri.go:89] found id: ""
	I0603 13:54:09.244729 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.244740 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:09.244749 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:09.244818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:09.281633 1143678 cri.go:89] found id: ""
	I0603 13:54:09.281666 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.281675 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:09.281686 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:09.281700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:09.341287 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:09.341331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:09.355379 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:09.355415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:09.435934 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:09.435960 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:09.435979 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:09.518203 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:09.518248 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.061538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:12.076939 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:12.077020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:12.114308 1143678 cri.go:89] found id: ""
	I0603 13:54:12.114344 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.114353 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:12.114359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:12.114427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:12.150336 1143678 cri.go:89] found id: ""
	I0603 13:54:12.150368 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.150383 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:12.150390 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:12.150455 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:12.189881 1143678 cri.go:89] found id: ""
	I0603 13:54:12.189934 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.189946 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:12.189954 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:12.190020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:12.226361 1143678 cri.go:89] found id: ""
	I0603 13:54:12.226396 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.226407 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:12.226415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:12.226488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:12.264216 1143678 cri.go:89] found id: ""
	I0603 13:54:12.264257 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.264265 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:12.264271 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:12.264341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:12.306563 1143678 cri.go:89] found id: ""
	I0603 13:54:12.306600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.306612 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:12.306620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:12.306690 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:12.347043 1143678 cri.go:89] found id: ""
	I0603 13:54:12.347082 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.347094 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:12.347105 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:12.347170 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:08.317824 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.816743 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.816776 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.372079 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.872314 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.990240 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:13.489762 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.383947 1143678 cri.go:89] found id: ""
	I0603 13:54:12.383978 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.383989 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:12.384001 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:12.384018 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:12.464306 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:12.464348 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.505079 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:12.505110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:12.563631 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:12.563666 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:12.578328 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:12.578357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:12.646015 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.147166 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:15.163786 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:15.163865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:15.202249 1143678 cri.go:89] found id: ""
	I0603 13:54:15.202286 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.202296 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:15.202304 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:15.202372 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:15.236305 1143678 cri.go:89] found id: ""
	I0603 13:54:15.236345 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.236359 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:15.236368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:15.236459 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:15.273457 1143678 cri.go:89] found id: ""
	I0603 13:54:15.273493 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.273510 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:15.273521 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:15.273592 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:15.314917 1143678 cri.go:89] found id: ""
	I0603 13:54:15.314951 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.314963 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:15.314984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:15.315055 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:15.353060 1143678 cri.go:89] found id: ""
	I0603 13:54:15.353098 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.353112 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:15.353118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:15.353197 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:15.390412 1143678 cri.go:89] found id: ""
	I0603 13:54:15.390448 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.390460 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:15.390469 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:15.390534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:15.427735 1143678 cri.go:89] found id: ""
	I0603 13:54:15.427771 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.427782 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:15.427789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:15.427854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:15.467134 1143678 cri.go:89] found id: ""
	I0603 13:54:15.467165 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.467175 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:15.467184 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:15.467199 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:15.517924 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:15.517973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:15.531728 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:15.531760 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:15.608397 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.608421 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:15.608444 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:15.688976 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:15.689016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.319250 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:16.817018 1143252 pod_ready.go:81] duration metric: took 4m0.00664589s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:16.817042 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:16.817049 1143252 pod_ready.go:38] duration metric: took 4m6.670583216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:16.817081 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:16.817110 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:16.817158 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:16.871314 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:16.871339 1143252 cri.go:89] found id: ""
	I0603 13:54:16.871350 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:16.871405 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.876249 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:16.876319 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:16.917267 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:16.917298 1143252 cri.go:89] found id: ""
	I0603 13:54:16.917310 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:16.917374 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.923290 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:16.923374 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:16.963598 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:16.963619 1143252 cri.go:89] found id: ""
	I0603 13:54:16.963628 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:16.963689 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.968201 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:16.968277 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:17.008229 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:17.008264 1143252 cri.go:89] found id: ""
	I0603 13:54:17.008274 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:17.008341 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.012719 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:17.012795 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:17.048353 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.048384 1143252 cri.go:89] found id: ""
	I0603 13:54:17.048394 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:17.048459 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.053094 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:17.053162 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:17.088475 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:17.088507 1143252 cri.go:89] found id: ""
	I0603 13:54:17.088518 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:17.088583 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.093293 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:17.093373 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:17.130335 1143252 cri.go:89] found id: ""
	I0603 13:54:17.130370 1143252 logs.go:276] 0 containers: []
	W0603 13:54:17.130381 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:17.130389 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:17.130472 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:17.176283 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:17.176317 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:17.176324 1143252 cri.go:89] found id: ""
	I0603 13:54:17.176335 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:17.176409 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.181455 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.185881 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:17.185902 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:17.239636 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:17.239680 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:17.309488 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:17.309532 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:17.362243 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:17.362282 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:17.401389 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:17.401440 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.442095 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:17.442127 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:17.923198 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:17.923247 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:17.939968 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:17.940000 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:18.075054 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:18.075098 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:18.113954 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:18.113994 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:18.181862 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:18.181906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:18.227105 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:18.227137 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:18.272684 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.272721 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.371753 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:17.870321 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:19.879331 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:15.990326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.489960 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.228279 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:18.242909 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:18.242985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:18.285400 1143678 cri.go:89] found id: ""
	I0603 13:54:18.285445 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.285455 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:18.285461 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:18.285521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:18.321840 1143678 cri.go:89] found id: ""
	I0603 13:54:18.321868 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.321877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:18.321884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:18.321943 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:18.358856 1143678 cri.go:89] found id: ""
	I0603 13:54:18.358888 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.358902 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:18.358911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:18.358979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:18.395638 1143678 cri.go:89] found id: ""
	I0603 13:54:18.395678 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.395691 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:18.395699 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:18.395766 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:18.435541 1143678 cri.go:89] found id: ""
	I0603 13:54:18.435570 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.435581 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:18.435589 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:18.435653 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:18.469491 1143678 cri.go:89] found id: ""
	I0603 13:54:18.469527 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.469538 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:18.469545 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:18.469615 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:18.507986 1143678 cri.go:89] found id: ""
	I0603 13:54:18.508018 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.508030 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:18.508039 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:18.508106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:18.542311 1143678 cri.go:89] found id: ""
	I0603 13:54:18.542343 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.542351 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:18.542361 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:18.542375 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:18.619295 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.619337 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:18.662500 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:18.662540 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:18.714392 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:18.714432 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:18.728750 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:18.728785 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:18.800786 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.301554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:21.315880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:21.315944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:21.358178 1143678 cri.go:89] found id: ""
	I0603 13:54:21.358208 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.358217 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:21.358227 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:21.358289 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:21.395873 1143678 cri.go:89] found id: ""
	I0603 13:54:21.395969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.395995 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:21.396014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:21.396111 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:21.431781 1143678 cri.go:89] found id: ""
	I0603 13:54:21.431810 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.431822 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:21.431831 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:21.431906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.472840 1143678 cri.go:89] found id: ""
	I0603 13:54:21.472872 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.472885 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:21.472893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.472955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.512296 1143678 cri.go:89] found id: ""
	I0603 13:54:21.512333 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.512346 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:21.512353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.512421 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.547555 1143678 cri.go:89] found id: ""
	I0603 13:54:21.547588 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.547599 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:21.547609 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.547670 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.584972 1143678 cri.go:89] found id: ""
	I0603 13:54:21.585005 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.585013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.585019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:21.585085 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:21.621566 1143678 cri.go:89] found id: ""
	I0603 13:54:21.621599 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.621610 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:21.621623 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:21.621639 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:21.637223 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:21.637263 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:21.712272 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.712294 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.712310 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.800453 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:21.800490 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:21.841477 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.841525 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:20.819740 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:20.836917 1143252 api_server.go:72] duration metric: took 4m15.913250824s to wait for apiserver process to appear ...
	I0603 13:54:20.836947 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:20.836988 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:20.837038 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:20.874034 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:20.874064 1143252 cri.go:89] found id: ""
	I0603 13:54:20.874076 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:20.874146 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.878935 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:20.879020 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:20.920390 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:20.920417 1143252 cri.go:89] found id: ""
	I0603 13:54:20.920425 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:20.920494 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.924858 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:20.924934 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:20.966049 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:20.966077 1143252 cri.go:89] found id: ""
	I0603 13:54:20.966088 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:20.966174 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.970734 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:20.970812 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.010892 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.010918 1143252 cri.go:89] found id: ""
	I0603 13:54:21.010929 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:21.010994 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.016274 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.016347 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.055294 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.055318 1143252 cri.go:89] found id: ""
	I0603 13:54:21.055327 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:21.055375 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.060007 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.060069 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.099200 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:21.099225 1143252 cri.go:89] found id: ""
	I0603 13:54:21.099236 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:21.099309 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.103590 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.103662 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.140375 1143252 cri.go:89] found id: ""
	I0603 13:54:21.140409 1143252 logs.go:276] 0 containers: []
	W0603 13:54:21.140422 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.140431 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:21.140498 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:21.180709 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.180735 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.180739 1143252 cri.go:89] found id: ""
	I0603 13:54:21.180747 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:21.180814 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.184952 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.189111 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.189140 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.663768 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:21.663807 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:21.719542 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:21.719573 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:21.786686 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:21.786725 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:21.824908 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:21.824948 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.864778 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:21.864818 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.904450 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:21.904480 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.942006 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:21.942040 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.979636 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.979673 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:22.033943 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:22.033980 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:22.048545 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:22.048578 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:22.154866 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:22.154906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:22.218033 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:22.218073 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:22.374700 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.871898 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:20.989874 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:23.489083 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.394864 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:24.408416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.408527 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.444572 1143678 cri.go:89] found id: ""
	I0603 13:54:24.444603 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.444612 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:24.444618 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.444672 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.483710 1143678 cri.go:89] found id: ""
	I0603 13:54:24.483744 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.483755 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:24.483763 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.483837 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.522396 1143678 cri.go:89] found id: ""
	I0603 13:54:24.522437 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.522450 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:24.522457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.522520 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.560865 1143678 cri.go:89] found id: ""
	I0603 13:54:24.560896 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.560905 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:24.560911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.560964 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:24.598597 1143678 cri.go:89] found id: ""
	I0603 13:54:24.598632 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.598643 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:24.598657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:24.598722 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:24.638854 1143678 cri.go:89] found id: ""
	I0603 13:54:24.638885 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.638897 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:24.638908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:24.638979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:24.678039 1143678 cri.go:89] found id: ""
	I0603 13:54:24.678076 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.678088 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:24.678096 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:24.678166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:24.712836 1143678 cri.go:89] found id: ""
	I0603 13:54:24.712871 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.712883 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:24.712896 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:24.712913 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:24.763503 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:24.763545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:24.779383 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:24.779416 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:24.867254 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:24.867287 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:24.867307 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:24.944920 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:24.944957 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:24.768551 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:54:24.774942 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:54:24.776278 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:24.776301 1143252 api_server.go:131] duration metric: took 3.939347802s to wait for apiserver health ...
	I0603 13:54:24.776310 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:24.776334 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.776386 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.827107 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:24.827139 1143252 cri.go:89] found id: ""
	I0603 13:54:24.827152 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:24.827210 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.831681 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.831752 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.875645 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:24.875689 1143252 cri.go:89] found id: ""
	I0603 13:54:24.875711 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:24.875778 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.880157 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.880256 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.932131 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:24.932157 1143252 cri.go:89] found id: ""
	I0603 13:54:24.932167 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:24.932262 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.938104 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.938168 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.980289 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:24.980318 1143252 cri.go:89] found id: ""
	I0603 13:54:24.980327 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:24.980389 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.985608 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.985687 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:25.033726 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.033749 1143252 cri.go:89] found id: ""
	I0603 13:54:25.033757 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:25.033811 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.038493 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:25.038561 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:25.077447 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.077474 1143252 cri.go:89] found id: ""
	I0603 13:54:25.077485 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:25.077545 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.081701 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:25.081770 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:25.120216 1143252 cri.go:89] found id: ""
	I0603 13:54:25.120246 1143252 logs.go:276] 0 containers: []
	W0603 13:54:25.120254 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:25.120261 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:25.120313 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:25.162562 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.162596 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.162602 1143252 cri.go:89] found id: ""
	I0603 13:54:25.162613 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:25.162678 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.167179 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.171531 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:25.171558 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:25.223749 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:25.223787 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:25.290251 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:25.290293 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:25.315271 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:25.315302 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:25.433219 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:25.433257 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:25.473156 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:25.473194 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:25.513988 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:25.514015 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.587224 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:25.587260 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:25.638872 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:25.638909 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:25.687323 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:25.687372 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.739508 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:25.739539 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.775066 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:25.775096 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.811982 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:25.812016 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:28.685228 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:28.685261 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.685265 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.685269 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.685272 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.685276 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.685279 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.685285 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.685290 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.685298 1143252 system_pods.go:74] duration metric: took 3.908982484s to wait for pod list to return data ...
	I0603 13:54:28.685305 1143252 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:28.687914 1143252 default_sa.go:45] found service account: "default"
	I0603 13:54:28.687939 1143252 default_sa.go:55] duration metric: took 2.627402ms for default service account to be created ...
	I0603 13:54:28.687947 1143252 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:28.693336 1143252 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:28.693369 1143252 system_pods.go:89] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.693375 1143252 system_pods.go:89] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.693379 1143252 system_pods.go:89] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.693385 1143252 system_pods.go:89] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.693389 1143252 system_pods.go:89] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.693393 1143252 system_pods.go:89] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.693401 1143252 system_pods.go:89] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.693418 1143252 system_pods.go:89] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.693438 1143252 system_pods.go:126] duration metric: took 5.484487ms to wait for k8s-apps to be running ...
	I0603 13:54:28.693450 1143252 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:28.693497 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:28.710364 1143252 system_svc.go:56] duration metric: took 16.901982ms WaitForService to wait for kubelet
	I0603 13:54:28.710399 1143252 kubeadm.go:576] duration metric: took 4m23.786738812s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:28.710444 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:28.713300 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:28.713328 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:28.713362 1143252 node_conditions.go:105] duration metric: took 2.909242ms to run NodePressure ...
	I0603 13:54:28.713382 1143252 start.go:240] waiting for startup goroutines ...
	I0603 13:54:28.713392 1143252 start.go:245] waiting for cluster config update ...
	I0603 13:54:28.713424 1143252 start.go:254] writing updated cluster config ...
	I0603 13:54:28.713798 1143252 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:28.767538 1143252 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:28.769737 1143252 out.go:177] * Done! kubectl is now configured to use "embed-certs-223260" cluster and "default" namespace by default
	I0603 13:54:27.370695 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:29.870214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:25.990136 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:28.489276 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:30.489392 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:27.495908 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:27.509885 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:27.509968 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:27.545591 1143678 cri.go:89] found id: ""
	I0603 13:54:27.545626 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.545635 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:27.545641 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:27.545695 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:27.583699 1143678 cri.go:89] found id: ""
	I0603 13:54:27.583728 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.583740 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:27.583748 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:27.583835 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:27.623227 1143678 cri.go:89] found id: ""
	I0603 13:54:27.623268 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.623277 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:27.623283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:27.623341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:27.663057 1143678 cri.go:89] found id: ""
	I0603 13:54:27.663090 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.663102 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:27.663109 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:27.663187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:27.708448 1143678 cri.go:89] found id: ""
	I0603 13:54:27.708481 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.708489 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:27.708495 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:27.708551 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:27.743629 1143678 cri.go:89] found id: ""
	I0603 13:54:27.743663 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.743674 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:27.743682 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:27.743748 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:27.778094 1143678 cri.go:89] found id: ""
	I0603 13:54:27.778128 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.778137 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:27.778147 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:27.778210 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:27.813137 1143678 cri.go:89] found id: ""
	I0603 13:54:27.813170 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.813180 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:27.813192 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:27.813208 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:27.861100 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:27.861136 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:27.914752 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:27.914794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:27.929479 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:27.929511 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:28.002898 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:28.002926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:28.002942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.581890 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:30.595982 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:30.596068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:30.638804 1143678 cri.go:89] found id: ""
	I0603 13:54:30.638841 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.638853 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:30.638862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:30.638942 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:30.677202 1143678 cri.go:89] found id: ""
	I0603 13:54:30.677242 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.677253 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:30.677262 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:30.677329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:30.717382 1143678 cri.go:89] found id: ""
	I0603 13:54:30.717436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.717446 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:30.717455 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:30.717523 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:30.753691 1143678 cri.go:89] found id: ""
	I0603 13:54:30.753719 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.753728 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:30.753734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:30.753798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:30.790686 1143678 cri.go:89] found id: ""
	I0603 13:54:30.790714 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.790723 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:30.790729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:30.790783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:30.830196 1143678 cri.go:89] found id: ""
	I0603 13:54:30.830224 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.830237 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:30.830245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:30.830299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:30.865952 1143678 cri.go:89] found id: ""
	I0603 13:54:30.865980 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.865992 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:30.866000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:30.866066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:30.901561 1143678 cri.go:89] found id: ""
	I0603 13:54:30.901592 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.901601 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:30.901610 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:30.901627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.979416 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:30.979459 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:31.035024 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:31.035061 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:31.089005 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:31.089046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:31.105176 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:31.105210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:31.172862 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:32.371040 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.870810 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:32.989041 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.989599 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:33.674069 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:33.688423 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:33.688499 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:33.729840 1143678 cri.go:89] found id: ""
	I0603 13:54:33.729876 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.729886 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:33.729893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:33.729945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:33.764984 1143678 cri.go:89] found id: ""
	I0603 13:54:33.765010 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.765018 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:33.765025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:33.765075 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:33.798411 1143678 cri.go:89] found id: ""
	I0603 13:54:33.798446 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.798459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:33.798468 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:33.798547 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:33.831565 1143678 cri.go:89] found id: ""
	I0603 13:54:33.831600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.831611 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:33.831620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:33.831688 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:33.869701 1143678 cri.go:89] found id: ""
	I0603 13:54:33.869727 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.869735 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:33.869741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:33.869802 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:33.906108 1143678 cri.go:89] found id: ""
	I0603 13:54:33.906134 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.906144 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:33.906153 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:33.906218 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:33.946577 1143678 cri.go:89] found id: ""
	I0603 13:54:33.946607 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.946615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:33.946621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:33.946673 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:33.986691 1143678 cri.go:89] found id: ""
	I0603 13:54:33.986724 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.986743 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:33.986757 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:33.986775 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:34.044068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:34.044110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:34.059686 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:34.059724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:34.141490 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:34.141514 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:34.141531 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:34.227890 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:34.227930 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:36.778969 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:36.792527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:36.792612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:36.828044 1143678 cri.go:89] found id: ""
	I0603 13:54:36.828083 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.828096 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:36.828102 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:36.828166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:36.863869 1143678 cri.go:89] found id: ""
	I0603 13:54:36.863905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.863917 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:36.863926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:36.863996 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:36.899610 1143678 cri.go:89] found id: ""
	I0603 13:54:36.899649 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.899661 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:36.899669 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:36.899742 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:36.938627 1143678 cri.go:89] found id: ""
	I0603 13:54:36.938664 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.938675 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:36.938683 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:36.938739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:36.973810 1143678 cri.go:89] found id: ""
	I0603 13:54:36.973842 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.973857 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:36.973863 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:36.973915 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.013759 1143678 cri.go:89] found id: ""
	I0603 13:54:37.013792 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.013805 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:37.013813 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.013881 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.049665 1143678 cri.go:89] found id: ""
	I0603 13:54:37.049697 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.049706 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.049712 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:37.049787 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:37.087405 1143678 cri.go:89] found id: ""
	I0603 13:54:37.087436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.087446 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:37.087457 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:37.087470 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:37.126443 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.126476 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.177976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:37.178015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:37.192821 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:37.192860 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:37.267895 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:37.267926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:37.267945 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:36.871536 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:37.371048 1143450 pod_ready.go:81] duration metric: took 4m0.007102739s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:37.371080 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:37.371092 1143450 pod_ready.go:38] duration metric: took 4m5.236838117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:37.371111 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:37.371145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:37.371202 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:37.428454 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:37.428487 1143450 cri.go:89] found id: ""
	I0603 13:54:37.428498 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:37.428564 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.434473 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:37.434552 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:37.476251 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.476288 1143450 cri.go:89] found id: ""
	I0603 13:54:37.476300 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:37.476368 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.483190 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:37.483280 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:37.528660 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.528693 1143450 cri.go:89] found id: ""
	I0603 13:54:37.528704 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:37.528797 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.533716 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:37.533809 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:37.573995 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.574016 1143450 cri.go:89] found id: ""
	I0603 13:54:37.574025 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:37.574071 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.578385 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:37.578465 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:37.616468 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:37.616511 1143450 cri.go:89] found id: ""
	I0603 13:54:37.616522 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:37.616603 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.621204 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:37.621277 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.661363 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.661390 1143450 cri.go:89] found id: ""
	I0603 13:54:37.661401 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:37.661507 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.665969 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.666055 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.705096 1143450 cri.go:89] found id: ""
	I0603 13:54:37.705128 1143450 logs.go:276] 0 containers: []
	W0603 13:54:37.705136 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.705142 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:37.705210 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:37.746365 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:37.746400 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.746404 1143450 cri.go:89] found id: ""
	I0603 13:54:37.746412 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:37.746470 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.750874 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.755146 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:37.755175 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.811365 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:37.811403 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.849687 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.849729 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.904870 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:37.904909 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.955448 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:37.955497 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.996659 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:37.996687 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:38.047501 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:38.047540 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:38.090932 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:38.090969 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:38.606612 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:38.606672 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:38.652732 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:38.652774 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:38.670570 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:38.670620 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:38.812156 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:38.812208 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:38.862940 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:38.862988 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.491134 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.990379 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.846505 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:39.860426 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:39.860514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:39.896684 1143678 cri.go:89] found id: ""
	I0603 13:54:39.896712 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.896726 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:39.896736 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:39.896801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:39.932437 1143678 cri.go:89] found id: ""
	I0603 13:54:39.932482 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.932494 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:39.932503 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:39.932571 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:39.967850 1143678 cri.go:89] found id: ""
	I0603 13:54:39.967883 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.967891 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:39.967898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:39.967952 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:40.003255 1143678 cri.go:89] found id: ""
	I0603 13:54:40.003284 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.003292 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:40.003298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:40.003351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:40.045865 1143678 cri.go:89] found id: ""
	I0603 13:54:40.045892 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.045904 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:40.045912 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:40.045976 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:40.082469 1143678 cri.go:89] found id: ""
	I0603 13:54:40.082498 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.082507 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:40.082513 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:40.082584 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:40.117181 1143678 cri.go:89] found id: ""
	I0603 13:54:40.117231 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.117242 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:40.117250 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:40.117320 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:40.157776 1143678 cri.go:89] found id: ""
	I0603 13:54:40.157813 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.157822 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:40.157832 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:40.157848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:40.213374 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:40.213437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:40.228298 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:40.228330 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:40.305450 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:40.305485 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:40.305503 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:40.393653 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:40.393704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.405129 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:41.423234 1143450 api_server.go:72] duration metric: took 4m14.998447047s to wait for apiserver process to appear ...
	I0603 13:54:41.423266 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:41.423312 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:41.423374 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:41.463540 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.463562 1143450 cri.go:89] found id: ""
	I0603 13:54:41.463570 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:41.463620 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.468145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:41.468226 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:41.511977 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.512000 1143450 cri.go:89] found id: ""
	I0603 13:54:41.512017 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:41.512081 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.516600 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:41.516674 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:41.554392 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:41.554420 1143450 cri.go:89] found id: ""
	I0603 13:54:41.554443 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:41.554508 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.558983 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:41.559039 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:41.597710 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:41.597737 1143450 cri.go:89] found id: ""
	I0603 13:54:41.597747 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:41.597811 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.602164 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:41.602227 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:41.639422 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:41.639452 1143450 cri.go:89] found id: ""
	I0603 13:54:41.639462 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:41.639532 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.644093 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:41.644171 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:41.682475 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.682506 1143450 cri.go:89] found id: ""
	I0603 13:54:41.682515 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:41.682578 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.687654 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:41.687734 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:41.724804 1143450 cri.go:89] found id: ""
	I0603 13:54:41.724839 1143450 logs.go:276] 0 containers: []
	W0603 13:54:41.724850 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:41.724858 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:41.724928 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:41.764625 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:41.764653 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:41.764659 1143450 cri.go:89] found id: ""
	I0603 13:54:41.764670 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:41.764736 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.769499 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.773782 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:41.773806 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.816486 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:41.816520 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:41.833538 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:41.833569 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.877958 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:41.878004 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.922575 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:41.922612 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.983865 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:41.983900 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:42.032746 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:42.032773 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:42.076129 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:42.076166 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:42.129061 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:42.129099 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:42.248179 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:42.248213 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:42.292179 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:42.292288 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:42.340447 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:42.340493 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:42.381993 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:42.382024 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:42.488926 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:44.990221 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:42.934691 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:42.948505 1143678 kubeadm.go:591] duration metric: took 4m4.45791317s to restartPrimaryControlPlane
	W0603 13:54:42.948592 1143678 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:54:42.948629 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:54:48.316951 1143678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.36829775s)
	I0603 13:54:48.317039 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:48.333630 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:54:48.345772 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:54:48.357359 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:54:48.357386 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:54:48.357477 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:54:48.367844 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:54:48.367917 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:54:48.379349 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:54:48.389684 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:54:48.389760 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:54:48.401562 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.412670 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:54:48.412743 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.424261 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:54:48.434598 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:54:48.434674 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
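The sequence above is minikube's stale-config cleanup before re-running kubeadm init: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not mention it (or does not exist) is removed. A minimal Go sketch of that loop, shelling out to the same grep/rm commands the log records; the helper name and error handling are illustrative, not minikube's code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanStaleKubeconfigs mirrors the log above: grep each kubeconfig for the
    // expected control-plane endpoint and delete the file when it is missing.
    func cleanStaleKubeconfigs() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// Equivalent to: sudo grep <endpoint> <file>
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			// grep exits non-zero when the endpoint (or the file) is absent,
    			// so the stale file is removed before kubeadm init runs again.
    			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
    			exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() { cleanStaleKubeconfigs() }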
	I0603 13:54:48.446187 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:54:48.527873 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:54:48.528073 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:54:48.695244 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:54:48.695401 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:54:48.695581 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:54:48.930141 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:54:45.281199 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:54:45.286305 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:54:45.287421 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:45.287444 1143450 api_server.go:131] duration metric: took 3.864171356s to wait for apiserver health ...
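The healthz lines above are the wait for the apiserver to answer on https://192.168.39.177:8444/healthz. A rough Go sketch of such a poll follows; the URL is taken from the log, while the timeout, interval and the TLS-verification skip are assumptions made for the sketch rather than minikube's actual settings:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes. Interval and timeout are illustrative only.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver serves a self-signed certificate here, so verification
    		// is skipped in this sketch; the real client trusts the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			return nil
    		}
    		if resp != nil {
    			resp.Body.Close()
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s did not become healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.177:8444/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }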
	I0603 13:54:45.287455 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:45.287486 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:45.287540 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:45.328984 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.329012 1143450 cri.go:89] found id: ""
	I0603 13:54:45.329022 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:45.329075 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.334601 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:45.334683 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:45.382942 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:45.382967 1143450 cri.go:89] found id: ""
	I0603 13:54:45.382978 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:45.383039 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.387904 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:45.387969 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:45.431948 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.431981 1143450 cri.go:89] found id: ""
	I0603 13:54:45.431992 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:45.432052 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.440993 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:45.441074 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:45.490086 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.490114 1143450 cri.go:89] found id: ""
	I0603 13:54:45.490125 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:45.490194 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.494628 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:45.494688 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:45.532264 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:45.532296 1143450 cri.go:89] found id: ""
	I0603 13:54:45.532307 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:45.532374 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.536914 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:45.536985 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:45.576641 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:45.576663 1143450 cri.go:89] found id: ""
	I0603 13:54:45.576671 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:45.576720 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.580872 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:45.580926 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:45.628834 1143450 cri.go:89] found id: ""
	I0603 13:54:45.628864 1143450 logs.go:276] 0 containers: []
	W0603 13:54:45.628872 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:45.628879 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:45.628931 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:45.671689 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:45.671719 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:45.671727 1143450 cri.go:89] found id: ""
	I0603 13:54:45.671740 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:45.671799 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.677161 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.682179 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:45.682219 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:45.731155 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:45.731192 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:45.846365 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:45.846411 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.907694 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:45.907733 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.952881 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:45.952919 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.998674 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:45.998722 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:46.061902 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:46.061949 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:46.106017 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:46.106056 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:46.473915 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:46.473981 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:46.530212 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:46.530260 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:46.545954 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:46.545996 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:46.595057 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:46.595097 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:46.637835 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:46.637872 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
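The "Gathering logs for ..." block above collects the tail of each CRI container's log with crictl, plus the crio and kubelet journald units. A compact Go sketch of that collection pass, reusing the commands shown in the log; the container ID passed in below is a shortened placeholder, the real IDs appear above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs tails each container's log via crictl and then the two journald
    // units, matching the commands recorded in the log above.
    func gatherLogs(containers map[string]string) {
    	for name, id := range containers {
    		out, _ := exec.Command("/bin/bash", "-c",
    			fmt.Sprintf("sudo /usr/bin/crictl logs --tail 400 %s", id)).CombinedOutput()
    		fmt.Printf("=== %s ===\n%s\n", name, out)
    	}
    	for _, unit := range []string{"crio", "kubelet"} {
    		out, _ := exec.Command("/bin/bash", "-c",
    			fmt.Sprintf("sudo journalctl -u %s -n 400", unit)).CombinedOutput()
    		fmt.Printf("=== %s ===\n%s\n", unit, out)
    	}
    }

    func main() {
    	// Placeholder ID; substitute one of the IDs from the log above.
    	gatherLogs(map[string]string{"kube-apiserver": "50541b09cc08..."})
    }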
	I0603 13:54:49.190539 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:49.190572 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.190577 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.190582 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.190586 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.190590 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.190593 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.190602 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.190609 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.190620 1143450 system_pods.go:74] duration metric: took 3.903157143s to wait for pod list to return data ...
	I0603 13:54:49.190633 1143450 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:49.193192 1143450 default_sa.go:45] found service account: "default"
	I0603 13:54:49.193219 1143450 default_sa.go:55] duration metric: took 2.575016ms for default service account to be created ...
	I0603 13:54:49.193229 1143450 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:49.202028 1143450 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:49.202065 1143450 system_pods.go:89] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.202074 1143450 system_pods.go:89] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.202081 1143450 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.202088 1143450 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.202094 1143450 system_pods.go:89] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.202100 1143450 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.202113 1143450 system_pods.go:89] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.202124 1143450 system_pods.go:89] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.202135 1143450 system_pods.go:126] duration metric: took 8.899065ms to wait for k8s-apps to be running ...
	I0603 13:54:49.202152 1143450 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:49.202209 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:49.220199 1143450 system_svc.go:56] duration metric: took 18.025994ms WaitForService to wait for kubelet
	I0603 13:54:49.220242 1143450 kubeadm.go:576] duration metric: took 4m22.79546223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:49.220269 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:49.223327 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:49.223354 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:49.223367 1143450 node_conditions.go:105] duration metric: took 3.093435ms to run NodePressure ...
	I0603 13:54:49.223383 1143450 start.go:240] waiting for startup goroutines ...
	I0603 13:54:49.223393 1143450 start.go:245] waiting for cluster config update ...
	I0603 13:54:49.223408 1143450 start.go:254] writing updated cluster config ...
	I0603 13:54:49.223704 1143450 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:49.277924 1143450 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:49.280442 1143450 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-030870" cluster and "default" namespace by default
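Before printing "Done!", the run above waited for the kube-system pods to be listed and Running, for the default service account, and for the kubelet service to be active. A loose Go sketch of the two checks that appear as shell commands in the log (kubectl and kubeconfig paths copied from the log; the wrapper function itself is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // verifyCluster approximates the final checks above: the kubelet service
    // must be active and the kube-system pods reachable through the apiserver.
    func verifyCluster() error {
    	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run(); err != nil {
    		return fmt.Errorf("kubelet service not active: %v", err)
    	}
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.30.1/kubectl",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"get", "pods", "-n", "kube-system").CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("listing kube-system pods: %v\n%s", err, out)
    	}
    	fmt.Printf("%s", out)
    	return nil
    }

    func main() {
    	if err := verifyCluster(); err != nil {
    		fmt.Println(err)
    	}
    }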
	I0603 13:54:48.932024 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:54:48.932110 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:54:48.932168 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:54:48.932235 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:54:48.932305 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:54:48.932481 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:54:48.932639 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:54:48.933272 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:54:48.933771 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:54:48.934251 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:54:48.934654 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:54:48.934712 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:54:48.934762 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:54:49.063897 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:54:49.266680 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:54:49.364943 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:54:49.628905 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:54:49.645861 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:54:49.645991 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:54:49.646049 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:54:49.795196 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:54:47.490336 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.989543 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.798407 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:54:49.798564 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:54:49.800163 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:54:49.802226 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:54:49.803809 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:54:49.806590 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:54:52.490088 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:54.990092 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:57.488119 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:59.489775 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:01.490194 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:03.989075 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:05.990054 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:08.489226 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:10.989028 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:13.489118 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:15.489176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:17.989008 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:20.489091 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:22.989284 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:24.990020 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.489326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.983679 1142862 pod_ready.go:81] duration metric: took 4m0.001142992s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	E0603 13:55:27.983708 1142862 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 13:55:27.983731 1142862 pod_ready.go:38] duration metric: took 4m12.038904247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:55:27.983760 1142862 kubeadm.go:591] duration metric: took 4m21.273943202s to restartPrimaryControlPlane
	W0603 13:55:27.983831 1142862 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:55:27.983865 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:55:29.807867 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:55:29.808474 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:29.808754 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:34.809455 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:34.809722 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:44.810305 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:44.810491 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
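The repeated [kubelet-check] lines above are kubeadm probing the kubelet's healthz endpoint on localhost:10248; on this run the connection keeps being refused, which is why the v1.20.0 init eventually stalls. A small Go sketch of the same probe (URL from the log; the retry count and sleep are assumptions):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeKubelet performs the check kubeadm reports above: an HTTP GET against
    // the kubelet's healthz endpoint on localhost:10248.
    func probeKubelet() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	for i := 0; i < 5; i++ {
    		resp, err := client.Get("http://localhost:10248/healthz")
    		if err != nil {
    			// Matches the "connection refused" seen in the log while the
    			// kubelet is not running.
    			fmt.Println("kubelet not healthy:", err)
    			time.Sleep(5 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("kubelet healthy:", resp.Status)
    		return
    	}
    }

    func main() { probeKubelet() }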
	I0603 13:55:59.870853 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.886953189s)
	I0603 13:55:59.870958 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:55:59.889658 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:55:59.901529 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:55:59.914241 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:55:59.914266 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:55:59.914312 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:55:59.924884 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:55:59.924950 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:55:59.935494 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:55:59.946222 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:55:59.946321 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:55:59.956749 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.967027 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:55:59.967110 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.979124 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:55:59.989689 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:55:59.989751 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:56:00.000616 1142862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:00.230878 1142862 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:04.811725 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:04.811929 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:08.995375 1142862 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 13:56:08.995463 1142862 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:08.995588 1142862 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:08.995724 1142862 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:08.995874 1142862 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:08.995970 1142862 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:08.997810 1142862 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:08.997914 1142862 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:08.998045 1142862 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:08.998154 1142862 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:08.998321 1142862 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:08.998423 1142862 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:08.998506 1142862 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:08.998578 1142862 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:08.998665 1142862 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:08.998764 1142862 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:08.998860 1142862 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:08.998919 1142862 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:08.999011 1142862 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:08.999111 1142862 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:08.999202 1142862 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 13:56:08.999275 1142862 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:08.999354 1142862 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:08.999423 1142862 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:08.999538 1142862 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:08.999692 1142862 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:09.001133 1142862 out.go:204]   - Booting up control plane ...
	I0603 13:56:09.001218 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:09.001293 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:09.001354 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:09.001499 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:09.001584 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:09.001637 1142862 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:09.001768 1142862 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 13:56:09.001881 1142862 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 13:56:09.001941 1142862 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.923053ms
	I0603 13:56:09.002010 1142862 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 13:56:09.002090 1142862 kubeadm.go:309] [api-check] The API server is healthy after 5.502208975s
	I0603 13:56:09.002224 1142862 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 13:56:09.002363 1142862 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 13:56:09.002457 1142862 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 13:56:09.002647 1142862 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-817450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 13:56:09.002713 1142862 kubeadm.go:309] [bootstrap-token] Using token: a7hbk8.xb8is7k6ewa3l3ya
	I0603 13:56:09.004666 1142862 out.go:204]   - Configuring RBAC rules ...
	I0603 13:56:09.004792 1142862 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 13:56:09.004883 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 13:56:09.005026 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 13:56:09.005234 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 13:56:09.005389 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 13:56:09.005531 1142862 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 13:56:09.005651 1142862 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 13:56:09.005709 1142862 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 13:56:09.005779 1142862 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 13:56:09.005787 1142862 kubeadm.go:309] 
	I0603 13:56:09.005869 1142862 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 13:56:09.005885 1142862 kubeadm.go:309] 
	I0603 13:56:09.006014 1142862 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 13:56:09.006034 1142862 kubeadm.go:309] 
	I0603 13:56:09.006076 1142862 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 13:56:09.006136 1142862 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 13:56:09.006197 1142862 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 13:56:09.006203 1142862 kubeadm.go:309] 
	I0603 13:56:09.006263 1142862 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 13:56:09.006273 1142862 kubeadm.go:309] 
	I0603 13:56:09.006330 1142862 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 13:56:09.006338 1142862 kubeadm.go:309] 
	I0603 13:56:09.006393 1142862 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 13:56:09.006476 1142862 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 13:56:09.006542 1142862 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 13:56:09.006548 1142862 kubeadm.go:309] 
	I0603 13:56:09.006629 1142862 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 13:56:09.006746 1142862 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 13:56:09.006758 1142862 kubeadm.go:309] 
	I0603 13:56:09.006850 1142862 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.006987 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 13:56:09.007028 1142862 kubeadm.go:309] 	--control-plane 
	I0603 13:56:09.007037 1142862 kubeadm.go:309] 
	I0603 13:56:09.007141 1142862 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 13:56:09.007170 1142862 kubeadm.go:309] 
	I0603 13:56:09.007266 1142862 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.007427 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
	I0603 13:56:09.007451 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:56:09.007464 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:56:09.009292 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:56:09.010750 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:56:09.022810 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:56:09.052132 1142862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-817450 minikube.k8s.io/updated_at=2024_06_03T13_56_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=no-preload-817450 minikube.k8s.io/primary=true
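The bridge CNI step above creates /etc/cni/net.d and writes a 496-byte 1-k8s.conflist; the file's contents are not captured in the log, so the conflist embedded in the sketch below is only a typical bridge+portmap example of that shape, not the bytes minikube actually wrote. The surrounding code follows the same mkdir-then-write steps the log shows:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // Illustrative bridge CNI config; the real /etc/cni/net.d/1-k8s.conflist is
    // not shown in the log, so treat this content as a sample only.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
         "ipMasq": true, "hairpinMode": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
    	dir := "/etc/cni/net.d" // equivalent of: sudo mkdir -p /etc/cni/net.d
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		fmt.Println(err)
    		return
    	}
    	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("bridge CNI config written")
    }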
	I0603 13:56:09.291610 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.296892 1142862 ops.go:34] apiserver oom_adj: -16
	I0603 13:56:09.792736 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.292471 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.792688 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.291782 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.792454 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.292056 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.792150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.292620 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.792024 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.292501 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.791790 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.292128 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.792608 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.292106 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.292276 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.292644 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.792571 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.292064 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.791908 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.292511 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.792137 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.292153 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.791809 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.882178 1142862 kubeadm.go:1107] duration metric: took 12.830108615s to wait for elevateKubeSystemPrivileges
	W0603 13:56:21.882223 1142862 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 13:56:21.882236 1142862 kubeadm.go:393] duration metric: took 5m15.237452092s to StartCluster
	I0603 13:56:21.882260 1142862 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.882368 1142862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:56:21.883986 1142862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.884288 1142862 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:56:21.885915 1142862 out.go:177] * Verifying Kubernetes components...
	I0603 13:56:21.884411 1142862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:56:21.884504 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:56:21.887156 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:56:21.887168 1142862 addons.go:69] Setting storage-provisioner=true in profile "no-preload-817450"
	I0603 13:56:21.887199 1142862 addons.go:69] Setting metrics-server=true in profile "no-preload-817450"
	I0603 13:56:21.887230 1142862 addons.go:234] Setting addon storage-provisioner=true in "no-preload-817450"
	W0603 13:56:21.887245 1142862 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:56:21.887261 1142862 addons.go:234] Setting addon metrics-server=true in "no-preload-817450"
	W0603 13:56:21.887276 1142862 addons.go:243] addon metrics-server should already be in state true
	I0603 13:56:21.887295 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887316 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887156 1142862 addons.go:69] Setting default-storageclass=true in profile "no-preload-817450"
	I0603 13:56:21.887366 1142862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-817450"
	I0603 13:56:21.887709 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887711 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887749 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887752 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887779 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887778 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.906019 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I0603 13:56:21.906319 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I0603 13:56:21.906563 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0603 13:56:21.906601 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.906714 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907043 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907126 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907143 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907269 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907288 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907558 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907578 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907752 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.907891 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908248 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.908269 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.908419 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908487 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.909150 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.909175 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.912898 1142862 addons.go:234] Setting addon default-storageclass=true in "no-preload-817450"
	W0603 13:56:21.912926 1142862 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:56:21.912963 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.913361 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.913413 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.928877 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0603 13:56:21.929336 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0603 13:56:21.929541 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930006 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930064 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0603 13:56:21.930161 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930186 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930580 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930723 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.930798 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930812 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930891 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.931037 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.931052 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.931187 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931369 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931394 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.932113 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.932140 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.933613 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.936068 1142862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:56:21.934518 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.937788 1142862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:21.937821 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:56:21.937844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.939174 1142862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:56:21.940435 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:56:21.940458 1142862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:56:21.940559 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.942628 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.943950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944227 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944257 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944449 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944658 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.944734 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944780 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.944919 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944932 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.945154 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.945309 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.945457 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.951140 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0603 13:56:21.951606 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.952125 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.952152 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.952579 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.952808 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.954505 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.954760 1142862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:21.954781 1142862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:56:21.954801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.958298 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.958816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.958851 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.959086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.959325 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.959515 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.959678 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:22.102359 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:56:22.121380 1142862 node_ready.go:35] waiting up to 6m0s for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135572 1142862 node_ready.go:49] node "no-preload-817450" has status "Ready":"True"
	I0603 13:56:22.135599 1142862 node_ready.go:38] duration metric: took 14.156504ms for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135614 1142862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:22.151036 1142862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:22.283805 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:22.288913 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:56:22.288938 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:56:22.297769 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:22.329187 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:56:22.329221 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:56:22.393569 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:22.393594 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:56:22.435605 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:23.470078 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18622743s)
	I0603 13:56:23.470155 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.172344092s)
	I0603 13:56:23.470171 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470192 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470200 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470216 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470515 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.470553 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470567 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470576 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470586 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470589 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470602 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470613 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470625 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470807 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470823 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.471108 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.471138 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.471180 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492187 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.492226 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.492596 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.492618 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492636 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.892903 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.45716212s)
	I0603 13:56:23.892991 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893006 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893418 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893426 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893442 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893459 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893468 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893790 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893811 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893832 1142862 addons.go:475] Verifying addon metrics-server=true in "no-preload-817450"
	I0603 13:56:23.895990 1142862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:56:23.897968 1142862 addons.go:510] duration metric: took 2.013558036s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 13:56:24.157803 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"False"
	I0603 13:56:24.658730 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.658765 1142862 pod_ready.go:81] duration metric: took 2.507699067s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.658779 1142862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664053 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.664084 1142862 pod_ready.go:81] duration metric: took 5.2962ms for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664096 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668496 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.668521 1142862 pod_ready.go:81] duration metric: took 4.417565ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668533 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673549 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.673568 1142862 pod_ready.go:81] duration metric: took 5.026882ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673577 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678207 1142862 pod_ready.go:92] pod "kube-proxy-t45fn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.678228 1142862 pod_ready.go:81] duration metric: took 4.644345ms for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678239 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056174 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:25.056204 1142862 pod_ready.go:81] duration metric: took 377.957963ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056214 1142862 pod_ready.go:38] duration metric: took 2.920586356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:25.056231 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:56:25.056294 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:56:25.071253 1142862 api_server.go:72] duration metric: took 3.186917827s to wait for apiserver process to appear ...
	I0603 13:56:25.071291 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:56:25.071319 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:56:25.076592 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:56:25.077531 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:56:25.077553 1142862 api_server.go:131] duration metric: took 6.255263ms to wait for apiserver health ...
	I0603 13:56:25.077561 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:56:25.258520 1142862 system_pods.go:59] 9 kube-system pods found
	I0603 13:56:25.258552 1142862 system_pods.go:61] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.258557 1142862 system_pods.go:61] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.258560 1142862 system_pods.go:61] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.258565 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.258569 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.258573 1142862 system_pods.go:61] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.258578 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.258585 1142862 system_pods.go:61] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.258591 1142862 system_pods.go:61] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.258603 1142862 system_pods.go:74] duration metric: took 181.034608ms to wait for pod list to return data ...
	I0603 13:56:25.258618 1142862 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:56:25.454775 1142862 default_sa.go:45] found service account: "default"
	I0603 13:56:25.454810 1142862 default_sa.go:55] duration metric: took 196.18004ms for default service account to be created ...
	I0603 13:56:25.454820 1142862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:56:25.658868 1142862 system_pods.go:86] 9 kube-system pods found
	I0603 13:56:25.658908 1142862 system_pods.go:89] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.658919 1142862 system_pods.go:89] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.658926 1142862 system_pods.go:89] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.658932 1142862 system_pods.go:89] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.658938 1142862 system_pods.go:89] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.658944 1142862 system_pods.go:89] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.658950 1142862 system_pods.go:89] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.658959 1142862 system_pods.go:89] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.658970 1142862 system_pods.go:89] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.658983 1142862 system_pods.go:126] duration metric: took 204.156078ms to wait for k8s-apps to be running ...
	I0603 13:56:25.658999 1142862 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:56:25.659058 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:25.674728 1142862 system_svc.go:56] duration metric: took 15.717684ms WaitForService to wait for kubelet
	I0603 13:56:25.674759 1142862 kubeadm.go:576] duration metric: took 3.790431991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:56:25.674777 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:56:25.855640 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:56:25.855671 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:56:25.855684 1142862 node_conditions.go:105] duration metric: took 180.901974ms to run NodePressure ...
	I0603 13:56:25.855696 1142862 start.go:240] waiting for startup goroutines ...
	I0603 13:56:25.855703 1142862 start.go:245] waiting for cluster config update ...
	I0603 13:56:25.855716 1142862 start.go:254] writing updated cluster config ...
	I0603 13:56:25.856020 1142862 ssh_runner.go:195] Run: rm -f paused
	I0603 13:56:25.908747 1142862 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:56:25.911049 1142862 out.go:177] * Done! kubectl is now configured to use "no-preload-817450" cluster and "default" namespace by default
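The readiness sequence above (node_ready, the pod_ready waits, and the api_server healthz probe) amounts to polling https://<node-ip>:8443/healthz until it answers 200. A minimal Go sketch of that probe follows; it is an illustration of the pattern, not minikube source, and the InsecureSkipVerify transport is an assumption made only to keep the example self-contained (minikube itself authenticates with the cluster's client certificates).

// Sketch: poll an apiserver /healthz endpoint until it returns 200 or a
// timeout expires, mirroring the "waiting for apiserver healthz status"
// step in the log above. URL and TLS settings are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is serving
			}
		}
		time.Sleep(2 * time.Second) // brief pause between probes
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.125:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz returned 200: ok")
}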
	I0603 13:56:44.813650 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:44.813933 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:44.813964 1143678 kubeadm.go:309] 
	I0603 13:56:44.814039 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:56:44.814075 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:56:44.814115 1143678 kubeadm.go:309] 
	I0603 13:56:44.814197 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:56:44.814246 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:56:44.814369 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:56:44.814378 1143678 kubeadm.go:309] 
	I0603 13:56:44.814496 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:56:44.814540 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:56:44.814573 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:56:44.814580 1143678 kubeadm.go:309] 
	I0603 13:56:44.814685 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:56:44.814785 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:56:44.814798 1143678 kubeadm.go:309] 
	I0603 13:56:44.814896 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:56:44.815001 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:56:44.815106 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:56:44.815208 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:56:44.815220 1143678 kubeadm.go:309] 
	I0603 13:56:44.816032 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:44.816137 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:56:44.816231 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 13:56:44.816405 1143678 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0603 13:56:44.816480 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:56:45.288649 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:45.305284 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:56:45.316705 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:56:45.316736 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:56:45.316804 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:56:45.327560 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:56:45.327630 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:56:45.337910 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:56:45.349864 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:56:45.349948 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:56:45.361369 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.371797 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:56:45.371866 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.382861 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:56:45.393310 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:56:45.393382 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
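The stale-config cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it before retrying 'kubeadm init'. A minimal Go sketch of that check, assuming shell access to sudo, grep, and rm; it mirrors the commands shown in the log rather than minikube's internal implementation.

// Sketch: drop kubeconfigs that do not point at the expected endpoint.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing stale config\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}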
	I0603 13:56:45.403822 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:45.476725 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:56:45.476794 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:45.630786 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:45.630956 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:45.631125 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:45.814370 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:45.816372 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:45.816481 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:45.816556 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:45.816710 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:45.816831 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:45.816928 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:45.817003 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:45.817093 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:45.817178 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:45.817328 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:45.817477 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:45.817533 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:45.817607 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:46.025905 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:46.331809 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:46.551488 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:46.636938 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:46.663292 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:46.663400 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:46.663448 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:46.840318 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:46.842399 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:56:46.842530 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:46.851940 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:46.855283 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:46.855443 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:46.857883 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:57:26.860915 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:57:26.861047 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:26.861296 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:31.861724 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:31.862046 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:41.862803 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:41.863057 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:01.862907 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:01.863136 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862069 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:41.862391 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862430 1143678 kubeadm.go:309] 
	I0603 13:58:41.862535 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:58:41.862613 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:58:41.862624 1143678 kubeadm.go:309] 
	I0603 13:58:41.862675 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:58:41.862737 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:58:41.862895 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:58:41.862909 1143678 kubeadm.go:309] 
	I0603 13:58:41.863030 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:58:41.863060 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:58:41.863090 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:58:41.863100 1143678 kubeadm.go:309] 
	I0603 13:58:41.863230 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:58:41.863388 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:58:41.863406 1143678 kubeadm.go:309] 
	I0603 13:58:41.863583 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:58:41.863709 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:58:41.863811 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:58:41.863894 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:58:41.863917 1143678 kubeadm.go:309] 
	I0603 13:58:41.865001 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:58:41.865120 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:58:41.865209 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 13:58:41.865361 1143678 kubeadm.go:393] duration metric: took 8m3.432874561s to StartCluster
	I0603 13:58:41.865460 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:58:41.865537 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:58:41.912780 1143678 cri.go:89] found id: ""
	I0603 13:58:41.912812 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.912826 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:58:41.912832 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:58:41.912901 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:58:41.951372 1143678 cri.go:89] found id: ""
	I0603 13:58:41.951402 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.951411 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:58:41.951418 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:58:41.951490 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:58:41.989070 1143678 cri.go:89] found id: ""
	I0603 13:58:41.989104 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.989115 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:58:41.989123 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:58:41.989191 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:58:42.026208 1143678 cri.go:89] found id: ""
	I0603 13:58:42.026238 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.026246 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:58:42.026252 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:58:42.026312 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:58:42.064899 1143678 cri.go:89] found id: ""
	I0603 13:58:42.064941 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.064950 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:58:42.064971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:58:42.065043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:58:42.098817 1143678 cri.go:89] found id: ""
	I0603 13:58:42.098858 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.098868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:58:42.098876 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:58:42.098939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:58:42.133520 1143678 cri.go:89] found id: ""
	I0603 13:58:42.133558 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.133570 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:58:42.133579 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:58:42.133639 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:58:42.187356 1143678 cri.go:89] found id: ""
	I0603 13:58:42.187387 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.187399 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:58:42.187412 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:58:42.187434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:58:42.249992 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:58:42.250034 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:58:42.272762 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:58:42.272801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:58:42.362004 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:58:42.362030 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:58:42.362046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:58:42.468630 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:58:42.468676 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0603 13:58:42.510945 1143678 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 13:58:42.511002 1143678 out.go:239] * 
	W0603 13:58:42.511094 1143678 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.511119 1143678 out.go:239] * 
	W0603 13:58:42.512307 1143678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:58:42.516199 1143678 out.go:177] 
	W0603 13:58:42.517774 1143678 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.517848 1143678 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 13:58:42.517883 1143678 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 13:58:42.519747 1143678 out.go:177] 
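The warning lines above already name the two follow-ups: checking 'journalctl -xeu kubelet' and retrying with the kubelet.cgroup-driver override. As a rough sketch only (the -p <profile> flag is omitted because the failing profile name is not shown in this block and would need to be supplied), they correspond to:

	  # inspect the kubelet unit on the failing node (command quoted in the kubeadm output above)
	  minikube ssh -- sudo journalctl -xeu kubelet
	  # retry the start with the cgroup driver override suggested by minikube
	  minikube start --extra-config=kubelet.cgroup-driver=systemd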
	
	
	==> CRI-O <==
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.556372900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423431556352305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24a80746-2127-43c4-b12e-7af8d9579058 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.556893258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0a929e3-49f3-42b0-a224-ff610b50b275 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.556946660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0a929e3-49f3-42b0-a224-ff610b50b275 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.557136047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422653381235369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac76999daa4b17dee826c4752ad866f2acbc6b38d89b12d2ef962de2a4f20e3,PodSandboxId:5dc28682caa3ee97ac27574a0837308d28e050b5e3c79a879b619d37ce3a5d4c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422631195089075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f50f4fd8-3455-456e-805d-c17087c1ca83,},Annotations:map[string]string{io.kubernetes.container.hash: f28de140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d,PodSandboxId:b4fafb83fdac4a702b3ef20fd25c696380772cc3a76753f310332677a34b6765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422630294360668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flxqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a116f363-ca50-4e2d-8c77-e99498c81e36,},Annotations:map[string]string{io.kubernetes.container.hash: f9622600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b,PodSandboxId:50212807557a6cae3046b392a9f588d9a4a74ddbd31ba2fc4db11ed6b55179df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422622608060742,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thsrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96df5442-b
343-47c8-a561-681a2d568d50,},Annotations:map[string]string{io.kubernetes.container.hash: e9a8b0fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422622572995988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc
-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d,PodSandboxId:105382b122cbc95b14a6aa53c76054737a5391260f189f0dd20e72059cdca7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422617925241759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b1dfed2df38083366ac860dd7d5c185,},Annotations:map[
string]string{io.kubernetes.container.hash: f5995d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a,PodSandboxId:f89c6219a84bbc27f2068d7ee65d9d5f07ba8c06ae07b3319be9f17ffbae51d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422617879459233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e5db6f179904992f4c5d517b64cc96f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7,PodSandboxId:253ddf921d5b0de5dee2202c0700997ad5d1efd4e620d239a3bdb1f47ac1b445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422617865448707,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4fa399a65397995760045276de
0216,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836,PodSandboxId:83a56ce979c24bec8630915596eb0ad1317808b60d3bc5c1ab9534f669454d2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422617784234081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b03774b0b44028cd4391d23b0016
9b,},Annotations:map[string]string{io.kubernetes.container.hash: 447f56d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0a929e3-49f3-42b0-a224-ff610b50b275 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.596822941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f5a5db4-edc9-442b-a493-ae8cdea0e5fe name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.596898240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f5a5db4-edc9-442b-a493-ae8cdea0e5fe name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.597962205Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfc8dd1c-92a9-485b-a6f5-4b0afb0f6c84 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.598391600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423431598369707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfc8dd1c-92a9-485b-a6f5-4b0afb0f6c84 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.599166464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a05f9e1-4621-48a5-842a-7bece548351d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.599244104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a05f9e1-4621-48a5-842a-7bece548351d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.599421849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422653381235369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac76999daa4b17dee826c4752ad866f2acbc6b38d89b12d2ef962de2a4f20e3,PodSandboxId:5dc28682caa3ee97ac27574a0837308d28e050b5e3c79a879b619d37ce3a5d4c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422631195089075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f50f4fd8-3455-456e-805d-c17087c1ca83,},Annotations:map[string]string{io.kubernetes.container.hash: f28de140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d,PodSandboxId:b4fafb83fdac4a702b3ef20fd25c696380772cc3a76753f310332677a34b6765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422630294360668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flxqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a116f363-ca50-4e2d-8c77-e99498c81e36,},Annotations:map[string]string{io.kubernetes.container.hash: f9622600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b,PodSandboxId:50212807557a6cae3046b392a9f588d9a4a74ddbd31ba2fc4db11ed6b55179df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422622608060742,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thsrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96df5442-b
343-47c8-a561-681a2d568d50,},Annotations:map[string]string{io.kubernetes.container.hash: e9a8b0fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422622572995988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc
-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d,PodSandboxId:105382b122cbc95b14a6aa53c76054737a5391260f189f0dd20e72059cdca7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422617925241759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b1dfed2df38083366ac860dd7d5c185,},Annotations:map[
string]string{io.kubernetes.container.hash: f5995d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a,PodSandboxId:f89c6219a84bbc27f2068d7ee65d9d5f07ba8c06ae07b3319be9f17ffbae51d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422617879459233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e5db6f179904992f4c5d517b64cc96f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7,PodSandboxId:253ddf921d5b0de5dee2202c0700997ad5d1efd4e620d239a3bdb1f47ac1b445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422617865448707,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4fa399a65397995760045276de
0216,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836,PodSandboxId:83a56ce979c24bec8630915596eb0ad1317808b60d3bc5c1ab9534f669454d2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422617784234081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b03774b0b44028cd4391d23b0016
9b,},Annotations:map[string]string{io.kubernetes.container.hash: 447f56d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a05f9e1-4621-48a5-842a-7bece548351d name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.640773760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a69e35d2-1a6f-44e4-bad2-dbb56069e6b6 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.640847096Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a69e35d2-1a6f-44e4-bad2-dbb56069e6b6 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.642270104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6699429-16d4-4a80-bb8c-cf6c8fa630c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.642952636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423431642929807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6699429-16d4-4a80-bb8c-cf6c8fa630c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.643588515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a9c5899-6674-438a-af1c-bacffb8da8ec name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.643643179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a9c5899-6674-438a-af1c-bacffb8da8ec name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.643822054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422653381235369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac76999daa4b17dee826c4752ad866f2acbc6b38d89b12d2ef962de2a4f20e3,PodSandboxId:5dc28682caa3ee97ac27574a0837308d28e050b5e3c79a879b619d37ce3a5d4c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422631195089075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f50f4fd8-3455-456e-805d-c17087c1ca83,},Annotations:map[string]string{io.kubernetes.container.hash: f28de140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d,PodSandboxId:b4fafb83fdac4a702b3ef20fd25c696380772cc3a76753f310332677a34b6765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422630294360668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flxqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a116f363-ca50-4e2d-8c77-e99498c81e36,},Annotations:map[string]string{io.kubernetes.container.hash: f9622600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b,PodSandboxId:50212807557a6cae3046b392a9f588d9a4a74ddbd31ba2fc4db11ed6b55179df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422622608060742,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thsrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96df5442-b
343-47c8-a561-681a2d568d50,},Annotations:map[string]string{io.kubernetes.container.hash: e9a8b0fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422622572995988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc
-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d,PodSandboxId:105382b122cbc95b14a6aa53c76054737a5391260f189f0dd20e72059cdca7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422617925241759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b1dfed2df38083366ac860dd7d5c185,},Annotations:map[
string]string{io.kubernetes.container.hash: f5995d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a,PodSandboxId:f89c6219a84bbc27f2068d7ee65d9d5f07ba8c06ae07b3319be9f17ffbae51d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422617879459233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e5db6f179904992f4c5d517b64cc96f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7,PodSandboxId:253ddf921d5b0de5dee2202c0700997ad5d1efd4e620d239a3bdb1f47ac1b445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422617865448707,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4fa399a65397995760045276de
0216,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836,PodSandboxId:83a56ce979c24bec8630915596eb0ad1317808b60d3bc5c1ab9534f669454d2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422617784234081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b03774b0b44028cd4391d23b0016
9b,},Annotations:map[string]string{io.kubernetes.container.hash: 447f56d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a9c5899-6674-438a-af1c-bacffb8da8ec name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.678544564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90227a7b-c8b3-4d78-858f-f196a3eab376 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.678619620Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90227a7b-c8b3-4d78-858f-f196a3eab376 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.682139701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e58e94a0-ec97-4d2c-b7f7-c748fbeb4b0a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.682614756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423431682591827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e58e94a0-ec97-4d2c-b7f7-c748fbeb4b0a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.683326538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1249998-eaf7-4aac-9a08-3b637889a675 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.683380054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1249998-eaf7-4aac-9a08-3b637889a675 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:03:51 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:03:51.683638989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422653381235369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac76999daa4b17dee826c4752ad866f2acbc6b38d89b12d2ef962de2a4f20e3,PodSandboxId:5dc28682caa3ee97ac27574a0837308d28e050b5e3c79a879b619d37ce3a5d4c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422631195089075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f50f4fd8-3455-456e-805d-c17087c1ca83,},Annotations:map[string]string{io.kubernetes.container.hash: f28de140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d,PodSandboxId:b4fafb83fdac4a702b3ef20fd25c696380772cc3a76753f310332677a34b6765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422630294360668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flxqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a116f363-ca50-4e2d-8c77-e99498c81e36,},Annotations:map[string]string{io.kubernetes.container.hash: f9622600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b,PodSandboxId:50212807557a6cae3046b392a9f588d9a4a74ddbd31ba2fc4db11ed6b55179df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422622608060742,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thsrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96df5442-b
343-47c8-a561-681a2d568d50,},Annotations:map[string]string{io.kubernetes.container.hash: e9a8b0fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422622572995988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc
-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d,PodSandboxId:105382b122cbc95b14a6aa53c76054737a5391260f189f0dd20e72059cdca7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422617925241759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b1dfed2df38083366ac860dd7d5c185,},Annotations:map[
string]string{io.kubernetes.container.hash: f5995d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a,PodSandboxId:f89c6219a84bbc27f2068d7ee65d9d5f07ba8c06ae07b3319be9f17ffbae51d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422617879459233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e5db6f179904992f4c5d517b64cc96f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7,PodSandboxId:253ddf921d5b0de5dee2202c0700997ad5d1efd4e620d239a3bdb1f47ac1b445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422617865448707,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4fa399a65397995760045276de
0216,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836,PodSandboxId:83a56ce979c24bec8630915596eb0ad1317808b60d3bc5c1ab9534f669454d2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422617784234081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b03774b0b44028cd4391d23b0016
9b,},Annotations:map[string]string{io.kubernetes.container.hash: 447f56d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1249998-eaf7-4aac-9a08-3b637889a675 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	969178964b33d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   a89474e4dfc76       storage-provisioner
	6ac76999daa4b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   5dc28682caa3e       busybox
	bc9ddfc8f250b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   b4fafb83fdac4       coredns-7db6d8ff4d-flxqj
	9359de3110480       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   50212807557a6       kube-proxy-thsrx
	bc407a1d19d20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   a89474e4dfc76       storage-provisioner
	c1051588032f5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   105382b122cbc       etcd-default-k8s-diff-port-030870
	7aab9931698b9       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   f89c6219a84bb       kube-scheduler-default-k8s-diff-port-030870
	b97dd1f775dd3       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   253ddf921d5b0       kube-controller-manager-default-k8s-diff-port-030870
	50541b09cc089       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   83a56ce979c24       kube-apiserver-default-k8s-diff-port-030870
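The listing above is what the crictl invocation quoted in the kubeadm error produces on this node. To follow that error message's next step and pull the logs of the one exited attempt shown here (storage-provisioner, attempt 1), the matching command would be, as a sketch using the container ID prefix from the table:

	  crictl --runtime-endpoint /var/run/crio/crio.sock logs bc407a1d19d20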
	
	
	==> coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39746 - 11886 "HINFO IN 1972896720099381992.1985859716288422354. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030146491s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-030870
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-030870
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=default-k8s-diff-port-030870
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_42_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:42:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-030870
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:03:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 14:01:04 +0000   Mon, 03 Jun 2024 13:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 14:01:04 +0000   Mon, 03 Jun 2024 13:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 14:01:04 +0000   Mon, 03 Jun 2024 13:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 14:01:04 +0000   Mon, 03 Jun 2024 13:50:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    default-k8s-diff-port-030870
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 542b7249b64443b180ba274289f8f2ee
	  System UUID:                542b7249-b644-43b1-80ba-274289f8f2ee
	  Boot ID:                    cfbcbd2e-8522-45d1-b37a-c0a941b08c1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-flxqj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-030870                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-030870             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-030870    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-thsrx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-030870             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-8xw9v                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-030870 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-030870 event: Registered Node default-k8s-diff-port-030870 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-030870 event: Registered Node default-k8s-diff-port-030870 in Controller
	
	
	==> dmesg <==
	[Jun 3 13:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056780] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041745] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.708923] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.416866] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.635491] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 13:50] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.066610] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084347] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.205695] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.172925] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.351117] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.868188] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.063444] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.516215] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.623561] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.998942] systemd-fstab-generator[1545]: Ignoring "noauto" option for root device
	[  +1.728805] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.900832] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] <==
	{"level":"info","ts":"2024-06-03T13:50:19.791969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 received MsgVoteResp from b3a0188682bd7022 at term 3"}
	{"level":"info","ts":"2024-06-03T13:50:19.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 became leader at term 3"}
	{"level":"info","ts":"2024-06-03T13:50:19.792034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b3a0188682bd7022 elected leader b3a0188682bd7022 at term 3"}
	{"level":"info","ts":"2024-06-03T13:50:19.802348Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:50:19.803644Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b3a0188682bd7022","local-member-attributes":"{Name:default-k8s-diff-port-030870 ClientURLs:[https://192.168.39.177:2379]}","request-path":"/0/members/b3a0188682bd7022/attributes","cluster-id":"e6df60d153d3d688","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T13:50:19.804162Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:50:19.804723Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T13:50:19.804769Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T13:50:19.805944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.177:2379"}
	{"level":"info","ts":"2024-06-03T13:50:19.807659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T13:50:37.315934Z","caller":"traceutil/trace.go:171","msg":"trace[2081468496] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"312.217531ms","start":"2024-06-03T13:50:37.00369Z","end":"2024-06-03T13:50:37.315907Z","steps":["trace[2081468496] 'read index received'  (duration: 311.919471ms)","trace[2081468496] 'applied index is now lower than readState.Index'  (duration: 297.181µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T13:50:37.316028Z","caller":"traceutil/trace.go:171","msg":"trace[211873187] transaction","detail":"{read_only:false; response_revision:636; number_of_response:1; }","duration":"409.386258ms","start":"2024-06-03T13:50:36.906634Z","end":"2024-06-03T13:50:37.316021Z","steps":["trace[211873187] 'process raft request'  (duration: 409.103448ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:37.316342Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.529038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-030870\" ","response":"range_response_count:1 size:5475"}
	{"level":"info","ts":"2024-06-03T13:50:37.316427Z","caller":"traceutil/trace.go:171","msg":"trace[1654601635] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-030870; range_end:; response_count:1; response_revision:636; }","duration":"171.717362ms","start":"2024-06-03T13:50:37.144698Z","end":"2024-06-03T13:50:37.316415Z","steps":["trace[1654601635] 'agreement among raft nodes before linearized reading'  (duration: 171.534066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:37.316635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.940319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T13:50:37.316657Z","caller":"traceutil/trace.go:171","msg":"trace[1157352558] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:636; }","duration":"312.992521ms","start":"2024-06-03T13:50:37.003658Z","end":"2024-06-03T13:50:37.316651Z","steps":["trace[1157352558] 'agreement among raft nodes before linearized reading'  (duration: 312.95397ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:37.316675Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:37.003643Z","time spent":"313.027787ms","remote":"127.0.0.1:40062","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-06-03T13:50:37.316741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:36.906616Z","time spent":"409.433487ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5460,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-030870\" mod_revision:574 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-030870\" value_size:5392 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-030870\" > >"}
	{"level":"warn","ts":"2024-06-03T13:50:37.935733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.850915ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-030870\" ","response":"range_response_count:1 size:5550"}
	{"level":"info","ts":"2024-06-03T13:50:37.935883Z","caller":"traceutil/trace.go:171","msg":"trace[1679571383] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-030870; range_end:; response_count:1; response_revision:636; }","duration":"391.044047ms","start":"2024-06-03T13:50:37.544824Z","end":"2024-06-03T13:50:37.935868Z","steps":["trace[1679571383] 'range keys from in-memory index tree'  (duration: 390.708926ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:37.935947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:37.544807Z","time spent":"391.129612ms","remote":"127.0.0.1:40240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5574,"request content":"key:\"/registry/minions/default-k8s-diff-port-030870\" "}
	{"level":"warn","ts":"2024-06-03T13:50:59.606114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.920762ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8080178866669689405 > lease_revoke:<id:70228fde5d7719c9>","response":"size:29"}
	{"level":"info","ts":"2024-06-03T14:00:19.845782Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":879}
	{"level":"info","ts":"2024-06-03T14:00:19.859098Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":879,"took":"12.905628ms","hash":3172330257,"current-db-size-bytes":2895872,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2895872,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-06-03T14:00:19.859171Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3172330257,"revision":879,"compact-revision":-1}
	
	
	==> kernel <==
	 14:03:52 up 14 min,  0 users,  load average: 0.14, 0.18, 0.12
	Linux default-k8s-diff-port-030870 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] <==
	I0603 13:58:22.276586       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:00:21.278423       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:00:21.278580       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 14:00:22.278758       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:00:22.278811       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:00:22.278820       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:00:22.278776       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:00:22.279021       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:00:22.280167       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:01:22.279811       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:01:22.279879       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:01:22.279888       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:01:22.281090       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:01:22.281148       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:01:22.281155       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:03:22.281042       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:03:22.281396       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:03:22.281462       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:03:22.281407       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:03:22.281642       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:03:22.283233       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] <==
	I0603 13:58:06.556013       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 13:58:35.990313       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 13:58:36.564574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 13:59:05.996268       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 13:59:06.572842       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 13:59:36.006931       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 13:59:36.580280       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:00:06.012980       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:00:06.589035       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:00:36.018465       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:00:36.596122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:01:06.025648       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:01:06.604802       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 14:01:27.196955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="287.921µs"
	E0603 14:01:36.030040       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:01:36.614218       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 14:01:41.199165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="173.692µs"
	E0603 14:02:06.035189       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:02:06.621667       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:02:36.040386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:02:36.628826       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:03:06.045473       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:03:06.636907       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:03:36.049728       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:03:36.644255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] <==
	I0603 13:50:22.811547       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:50:22.823458       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.177"]
	I0603 13:50:22.873189       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:50:22.873271       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:50:22.873298       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:50:22.876637       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:50:22.876958       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:50:22.876989       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:50:22.878193       1 config.go:192] "Starting service config controller"
	I0603 13:50:22.878228       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:50:22.878261       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:50:22.878265       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:50:22.879453       1 config.go:319] "Starting node config controller"
	I0603 13:50:22.879595       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:50:22.979232       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 13:50:22.979309       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:50:22.979941       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] <==
	I0603 13:50:18.797414       1 serving.go:380] Generated self-signed cert in-memory
	W0603 13:50:21.239685       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 13:50:21.239816       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 13:50:21.239926       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 13:50:21.239951       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 13:50:21.276905       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 13:50:21.277016       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:50:21.280874       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 13:50:21.280945       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 13:50:21.281444       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 13:50:21.281832       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 13:50:21.382144       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 14:01:17 default-k8s-diff-port-030870 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:01:17 default-k8s-diff-port-030870 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:01:17 default-k8s-diff-port-030870 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:01:27 default-k8s-diff-port-030870 kubelet[938]: E0603 14:01:27.179557     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:01:41 default-k8s-diff-port-030870 kubelet[938]: E0603 14:01:41.181580     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:01:52 default-k8s-diff-port-030870 kubelet[938]: E0603 14:01:52.178977     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:02:04 default-k8s-diff-port-030870 kubelet[938]: E0603 14:02:04.178954     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:02:15 default-k8s-diff-port-030870 kubelet[938]: E0603 14:02:15.183101     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:02:17 default-k8s-diff-port-030870 kubelet[938]: E0603 14:02:17.204260     938 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:02:17 default-k8s-diff-port-030870 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:02:17 default-k8s-diff-port-030870 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:02:17 default-k8s-diff-port-030870 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:02:17 default-k8s-diff-port-030870 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:02:30 default-k8s-diff-port-030870 kubelet[938]: E0603 14:02:30.178718     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:02:45 default-k8s-diff-port-030870 kubelet[938]: E0603 14:02:45.179242     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:02:56 default-k8s-diff-port-030870 kubelet[938]: E0603 14:02:56.179112     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:03:07 default-k8s-diff-port-030870 kubelet[938]: E0603 14:03:07.179113     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:03:17 default-k8s-diff-port-030870 kubelet[938]: E0603 14:03:17.203964     938 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:03:17 default-k8s-diff-port-030870 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:03:17 default-k8s-diff-port-030870 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:03:17 default-k8s-diff-port-030870 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:03:17 default-k8s-diff-port-030870 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:03:22 default-k8s-diff-port-030870 kubelet[938]: E0603 14:03:22.180734     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:03:37 default-k8s-diff-port-030870 kubelet[938]: E0603 14:03:37.179206     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:03:50 default-k8s-diff-port-030870 kubelet[938]: E0603 14:03:50.179740     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	
	
	==> storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] <==
	I0603 13:50:53.493461       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 13:50:53.512767       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 13:50:53.512853       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 13:51:10.919097       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 13:51:10.921354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-030870_5756df9b-0457-439a-9273-51b749b46572!
	I0603 13:51:10.922231       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e9bfded-55bf-4dea-97b9-05156a907d75", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-030870_5756df9b-0457-439a-9273-51b749b46572 became leader
	I0603 13:51:11.022553       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-030870_5756df9b-0457-439a-9273-51b749b46572!
	
	
	==> storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] <==
	I0603 13:50:22.750285       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0603 13:50:52.753832       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-030870 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-8xw9v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-030870 describe pod metrics-server-569cc877fc-8xw9v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-030870 describe pod metrics-server-569cc877fc-8xw9v: exit status 1 (67.071441ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-8xw9v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-030870 describe pod metrics-server-569cc877fc-8xw9v: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.66s)
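A minimal sketch of re-running the post-mortem checks above by hand, assuming the default-k8s-diff-port-030870 profile is still up (the context and pod names are taken from the log; the commands are standard kubectl):

  # List pods that are not Running, as helpers_test.go does above.
  kubectl --context default-k8s-diff-port-030870 get pods -A --field-selector=status.phase!=Running
  # The kubelet log shows why metrics-server never becomes Ready: the addon image
  # was redirected to fake.domain, so the pull backs off indefinitely. Events for
  # the pod (if it still exists) would show the same ImagePullBackOff.
  kubectl --context default-k8s-diff-port-030870 -n kube-system get events --field-selector involvedObject.name=metrics-server-569cc877fc-8xw9v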

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0603 13:57:27.933503 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:57:45.541236 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 13:58:22.013277 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-817450 -n no-preload-817450
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-03 14:05:26.469351215 +0000 UTC m=+6078.673909737
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
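For reference, the 9m0s wait that times out here is roughly equivalent to the following kubectl invocation (a sketch only; the context name comes from the log above, and kubectl wait exits non-zero when the timeout is hit):

  kubectl --context no-preload-817450 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m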
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-817450 -n no-preload-817450
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-817450 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-817450 logs -n 25: (2.360099144s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-021279 sudo cat                              | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo find                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo crio                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-021279                                       | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-069000 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | disable-driver-mounts-069000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:43 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-817450             | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC | 03 Jun 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223260            | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-030870  | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-817450                  | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-151788        | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC | 03 Jun 24 13:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223260                 | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC | 03 Jun 24 13:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-030870       | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:54 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-151788             | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:46:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:46:22.347386 1143678 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:46:22.347655 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347666 1143678 out.go:304] Setting ErrFile to fd 2...
	I0603 13:46:22.347672 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347855 1143678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:46:22.348458 1143678 out.go:298] Setting JSON to false
	I0603 13:46:22.349502 1143678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16129,"bootTime":1717406253,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:46:22.349571 1143678 start.go:139] virtualization: kvm guest
	I0603 13:46:22.351720 1143678 out.go:177] * [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:46:22.353180 1143678 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:46:22.353235 1143678 notify.go:220] Checking for updates...
	I0603 13:46:22.354400 1143678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:46:22.355680 1143678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:46:22.356796 1143678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:46:22.357952 1143678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:46:22.359052 1143678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:46:22.360807 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:46:22.361230 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.361306 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.376241 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0603 13:46:22.376679 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.377267 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.377292 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.377663 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.377897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.379705 1143678 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 13:46:22.380895 1143678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:46:22.381188 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.381222 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.396163 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0603 13:46:22.396669 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.397158 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.397180 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.397509 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.397693 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.433731 1143678 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:46:22.434876 1143678 start.go:297] selected driver: kvm2
	I0603 13:46:22.434897 1143678 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.435028 1143678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:46:22.435716 1143678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.435807 1143678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:46:22.451200 1143678 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:46:22.451663 1143678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:46:22.451755 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:46:22.451773 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:46:22.451832 1143678 start.go:340] cluster config:
	{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.451961 1143678 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.454327 1143678 out.go:177] * Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	I0603 13:46:22.057705 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:22.455453 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:46:22.455492 1143678 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 13:46:22.455501 1143678 cache.go:56] Caching tarball of preloaded images
	I0603 13:46:22.455591 1143678 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:46:22.455604 1143678 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 13:46:22.455685 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:46:22.455860 1143678 start.go:360] acquireMachinesLock for old-k8s-version-151788: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:46:28.137725 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:31.209684 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:37.289692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:40.361614 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:46.441692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:49.513686 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:55.593727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:58.665749 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:04.745752 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:07.817726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:13.897702 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:16.969727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:23.049716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:26.121758 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:32.201765 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:35.273759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:41.353716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:44.425767 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:50.505743 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:53.577777 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:59.657729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:02.729769 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:08.809709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:11.881708 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:17.961759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:21.033726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:27.113698 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:30.185691 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:36.265722 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:39.337764 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:45.417711 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:48.489729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:54.569746 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:57.641701 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:03.721772 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:06.793709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:12.873710 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:15.945728 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:22.025678 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:25.097675 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:28.102218 1143252 start.go:364] duration metric: took 3m44.709006863s to acquireMachinesLock for "embed-certs-223260"
	I0603 13:49:28.102293 1143252 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:28.102302 1143252 fix.go:54] fixHost starting: 
	I0603 13:49:28.102635 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:28.102666 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:28.118384 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0603 13:49:28.119014 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:28.119601 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:49:28.119630 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:28.119930 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:28.120116 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:28.120302 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:49:28.122003 1143252 fix.go:112] recreateIfNeeded on embed-certs-223260: state=Stopped err=<nil>
	I0603 13:49:28.122030 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	W0603 13:49:28.122167 1143252 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:28.123963 1143252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223260" ...
	I0603 13:49:28.125564 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Start
	I0603 13:49:28.125750 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring networks are active...
	I0603 13:49:28.126598 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network default is active
	I0603 13:49:28.126965 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network mk-embed-certs-223260 is active
	I0603 13:49:28.127319 1143252 main.go:141] libmachine: (embed-certs-223260) Getting domain xml...
	I0603 13:49:28.128017 1143252 main.go:141] libmachine: (embed-certs-223260) Creating domain...
	I0603 13:49:28.099474 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:28.099536 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.099883 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:49:28.099915 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.100115 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:49:28.102052 1142862 machine.go:97] duration metric: took 4m37.409499751s to provisionDockerMachine
	I0603 13:49:28.102123 1142862 fix.go:56] duration metric: took 4m37.432963538s for fixHost
	I0603 13:49:28.102135 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 4m37.432994587s
	W0603 13:49:28.102158 1142862 start.go:713] error starting host: provision: host is not running
	W0603 13:49:28.102317 1142862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 13:49:28.102332 1142862 start.go:728] Will try again in 5 seconds ...
	I0603 13:49:29.332986 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting to get IP...
	I0603 13:49:29.333963 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.334430 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.334475 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.334403 1144333 retry.go:31] will retry after 203.681987ms: waiting for machine to come up
	I0603 13:49:29.539995 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.540496 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.540564 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.540457 1144333 retry.go:31] will retry after 368.548292ms: waiting for machine to come up
	I0603 13:49:29.911212 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.911632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.911665 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.911566 1144333 retry.go:31] will retry after 402.690969ms: waiting for machine to come up
	I0603 13:49:30.316480 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.316889 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.316920 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.316852 1144333 retry.go:31] will retry after 500.397867ms: waiting for machine to come up
	I0603 13:49:30.818653 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.819082 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.819107 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.819026 1144333 retry.go:31] will retry after 663.669804ms: waiting for machine to come up
	I0603 13:49:31.483776 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:31.484117 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:31.484144 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:31.484079 1144333 retry.go:31] will retry after 938.433137ms: waiting for machine to come up
	I0603 13:49:32.424128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:32.424609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:32.424640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:32.424548 1144333 retry.go:31] will retry after 919.793328ms: waiting for machine to come up
	I0603 13:49:33.103895 1142862 start.go:360] acquireMachinesLock for no-preload-817450: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:49:33.346091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:33.346549 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:33.346574 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:33.346511 1144333 retry.go:31] will retry after 1.115349726s: waiting for machine to come up
	I0603 13:49:34.463875 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:34.464588 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:34.464616 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:34.464529 1144333 retry.go:31] will retry after 1.153940362s: waiting for machine to come up
	I0603 13:49:35.619844 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:35.620243 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:35.620275 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:35.620176 1144333 retry.go:31] will retry after 1.514504154s: waiting for machine to come up
	I0603 13:49:37.135961 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:37.136409 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:37.136431 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:37.136382 1144333 retry.go:31] will retry after 2.757306897s: waiting for machine to come up
	I0603 13:49:39.895589 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:39.895942 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:39.895970 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:39.895881 1144333 retry.go:31] will retry after 3.019503072s: waiting for machine to come up
	I0603 13:49:42.919177 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:42.919640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:42.919670 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:42.919588 1144333 retry.go:31] will retry after 3.150730989s: waiting for machine to come up
	I0603 13:49:47.494462 1143450 start.go:364] duration metric: took 3m37.207410663s to acquireMachinesLock for "default-k8s-diff-port-030870"
	I0603 13:49:47.494544 1143450 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:47.494557 1143450 fix.go:54] fixHost starting: 
	I0603 13:49:47.494876 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:47.494918 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:47.511570 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44939
	I0603 13:49:47.512072 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:47.512568 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:49:47.512593 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:47.512923 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:47.513117 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:49:47.513276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:49:47.514783 1143450 fix.go:112] recreateIfNeeded on default-k8s-diff-port-030870: state=Stopped err=<nil>
	I0603 13:49:47.514817 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	W0603 13:49:47.514999 1143450 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:47.517441 1143450 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-030870" ...
	I0603 13:49:46.071609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072094 1143252 main.go:141] libmachine: (embed-certs-223260) Found IP for machine: 192.168.83.246
	I0603 13:49:46.072117 1143252 main.go:141] libmachine: (embed-certs-223260) Reserving static IP address...
	I0603 13:49:46.072132 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has current primary IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072552 1143252 main.go:141] libmachine: (embed-certs-223260) Reserved static IP address: 192.168.83.246
	I0603 13:49:46.072585 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.072593 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting for SSH to be available...
	I0603 13:49:46.072632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | skip adding static IP to network mk-embed-certs-223260 - found existing host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"}
	I0603 13:49:46.072655 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Getting to WaitForSSH function...
	I0603 13:49:46.074738 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075059 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.075091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075189 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH client type: external
	I0603 13:49:46.075213 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa (-rw-------)
	I0603 13:49:46.075249 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:49:46.075271 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | About to run SSH command:
	I0603 13:49:46.075283 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | exit 0
	I0603 13:49:46.197971 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | SSH cmd err, output: <nil>: 
	I0603 13:49:46.198498 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetConfigRaw
	I0603 13:49:46.199179 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.201821 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.202277 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202533 1143252 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/config.json ...
	I0603 13:49:46.202727 1143252 machine.go:94] provisionDockerMachine start ...
	I0603 13:49:46.202745 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:46.202964 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.205259 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205636 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.205663 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205773 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.205954 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206100 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206318 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.206538 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.206819 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.206837 1143252 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:49:46.310241 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:49:46.310277 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310583 1143252 buildroot.go:166] provisioning hostname "embed-certs-223260"
	I0603 13:49:46.310616 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310836 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.313692 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314078 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.314116 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314222 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.314446 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314631 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314800 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.314969 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.315166 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.315183 1143252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223260 && echo "embed-certs-223260" | sudo tee /etc/hostname
	I0603 13:49:46.428560 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223260
	
	I0603 13:49:46.428600 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.431381 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.431757 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.431784 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.432021 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.432283 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432477 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432609 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.432785 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.432960 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.432976 1143252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223260/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:49:46.542400 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:46.542446 1143252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:49:46.542536 1143252 buildroot.go:174] setting up certificates
	I0603 13:49:46.542557 1143252 provision.go:84] configureAuth start
	I0603 13:49:46.542576 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.542913 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.545940 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546339 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.546368 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.548715 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549097 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.549127 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549294 1143252 provision.go:143] copyHostCerts
	I0603 13:49:46.549382 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:49:46.549397 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:49:46.549486 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:49:46.549578 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:49:46.549587 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:49:46.549613 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:49:46.549664 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:49:46.549671 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:49:46.549690 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:49:46.549740 1143252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223260 san=[127.0.0.1 192.168.83.246 embed-certs-223260 localhost minikube]
	I0603 13:49:46.807050 1143252 provision.go:177] copyRemoteCerts
	I0603 13:49:46.807111 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:49:46.807140 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.809916 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810303 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.810347 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810513 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.810758 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.810929 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.811168 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:46.892182 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:49:46.916657 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0603 13:49:46.941896 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:49:46.967292 1143252 provision.go:87] duration metric: took 424.714334ms to configureAuth
	I0603 13:49:46.967331 1143252 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:49:46.967539 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:49:46.967626 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.970350 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970668 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.970703 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970870 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.971115 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971314 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971454 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.971625 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.971809 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.971831 1143252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:49:47.264894 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:49:47.264922 1143252 machine.go:97] duration metric: took 1.062182146s to provisionDockerMachine
	I0603 13:49:47.264935 1143252 start.go:293] postStartSetup for "embed-certs-223260" (driver="kvm2")
	I0603 13:49:47.264946 1143252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:49:47.264963 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.265368 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:49:47.265398 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.268412 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268765 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.268796 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.269223 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.269455 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.269625 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.348583 1143252 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:49:47.352828 1143252 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:49:47.352867 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:49:47.352949 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:49:47.353046 1143252 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:49:47.353164 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:49:47.363222 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:47.388132 1143252 start.go:296] duration metric: took 123.177471ms for postStartSetup
	I0603 13:49:47.388202 1143252 fix.go:56] duration metric: took 19.285899119s for fixHost
	I0603 13:49:47.388233 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.390960 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391414 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.391477 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391681 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.391937 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392127 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392266 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.392436 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:47.392670 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:47.392687 1143252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:49:47.494294 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422587.469729448
	
	I0603 13:49:47.494320 1143252 fix.go:216] guest clock: 1717422587.469729448
	I0603 13:49:47.494328 1143252 fix.go:229] Guest: 2024-06-03 13:49:47.469729448 +0000 UTC Remote: 2024-06-03 13:49:47.388208749 +0000 UTC m=+244.138441135 (delta=81.520699ms)
	I0603 13:49:47.494354 1143252 fix.go:200] guest clock delta is within tolerance: 81.520699ms
	I0603 13:49:47.494361 1143252 start.go:83] releasing machines lock for "embed-certs-223260", held for 19.392103897s
	I0603 13:49:47.494394 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.494686 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:47.497515 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.497930 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.497976 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.498110 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498672 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498859 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498934 1143252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:49:47.498988 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.499062 1143252 ssh_runner.go:195] Run: cat /version.json
	I0603 13:49:47.499082 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.501788 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502075 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502131 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502156 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502291 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502390 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502427 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502550 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502647 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502738 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502806 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502942 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502955 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.503078 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.612706 1143252 ssh_runner.go:195] Run: systemctl --version
	I0603 13:49:47.618922 1143252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:49:47.764749 1143252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:49:47.770936 1143252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:49:47.771023 1143252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:49:47.788401 1143252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:49:47.788427 1143252 start.go:494] detecting cgroup driver to use...
	I0603 13:49:47.788486 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:49:47.805000 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:49:47.822258 1143252 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:49:47.822315 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:49:47.837826 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:49:47.853818 1143252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:49:47.978204 1143252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:49:48.106302 1143252 docker.go:233] disabling docker service ...
	I0603 13:49:48.106366 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:49:48.120974 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:49:48.134911 1143252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:49:48.278103 1143252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:49:48.398238 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:49:48.413207 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:49:48.432211 1143252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:49:48.432281 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.443668 1143252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:49:48.443746 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.454990 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.467119 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.479875 1143252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:49:48.496767 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.508872 1143252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.530972 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.542631 1143252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:49:48.552775 1143252 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:49:48.552836 1143252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:49:48.566528 1143252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:49:48.582917 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:48.716014 1143252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:49:48.860157 1143252 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:49:48.860283 1143252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:49:48.865046 1143252 start.go:562] Will wait 60s for crictl version
	I0603 13:49:48.865121 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:49:48.869520 1143252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:49:48.909721 1143252 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:49:48.909819 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.939080 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.970595 1143252 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:49:47.518807 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Start
	I0603 13:49:47.518981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring networks are active...
	I0603 13:49:47.519623 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network default is active
	I0603 13:49:47.519926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network mk-default-k8s-diff-port-030870 is active
	I0603 13:49:47.520408 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Getting domain xml...
	I0603 13:49:47.521014 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Creating domain...
	I0603 13:49:48.798483 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting to get IP...
	I0603 13:49:48.799695 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800174 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800305 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:48.800165 1144471 retry.go:31] will retry after 204.161843ms: waiting for machine to come up
	I0603 13:49:49.005669 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006143 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006180 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.006091 1144471 retry.go:31] will retry after 382.751679ms: waiting for machine to come up
	I0603 13:49:49.391162 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391717 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391750 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.391670 1144471 retry.go:31] will retry after 314.248576ms: waiting for machine to come up
	I0603 13:49:49.707349 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707957 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707990 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.707856 1144471 retry.go:31] will retry after 446.461931ms: waiting for machine to come up
	I0603 13:49:50.155616 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156238 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156274 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.156174 1144471 retry.go:31] will retry after 712.186964ms: waiting for machine to come up
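The retry lines are the kvm2 driver polling libvirt until a DHCP lease shows up for the domain's MAC address. The same lease table can be watched directly on the host; a sketch assuming virsh access to the qemu:///system URI the driver uses:

# lists leases on the machine network; the row for 52:54:00:62:09:d4 appears once the guest gets an IP
virsh --connect qemu:///system net-dhcp-leases mk-default-k8s-diff-port-030870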
	I0603 13:49:48.971971 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:48.975079 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975439 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:48.975471 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975721 1143252 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0603 13:49:48.980114 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:48.993380 1143252 kubeadm.go:877] updating cluster {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:49:48.993543 1143252 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:49:48.993636 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:49.032289 1143252 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:49:49.032364 1143252 ssh_runner.go:195] Run: which lz4
	I0603 13:49:49.036707 1143252 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:49:49.040973 1143252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:49:49.041000 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:49:50.554295 1143252 crio.go:462] duration metric: took 1.517623353s to copy over tarball
	I0603 13:49:50.554387 1143252 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:49:52.823733 1143252 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269303423s)
	I0603 13:49:52.823785 1143252 crio.go:469] duration metric: took 2.269454274s to extract the tarball
	I0603 13:49:52.823799 1143252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:49:52.862060 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:52.906571 1143252 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:49:52.906602 1143252 cache_images.go:84] Images are preloaded, skipping loading
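Whether the preload actually landed can be spot-checked the same way the runner does, by asking CRI-O for its image list; a hedged one-liner:

# the runner uses `sudo crictl images --output json`; the table form is enough for a manual check
sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy'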
	I0603 13:49:52.906618 1143252 kubeadm.go:928] updating node { 192.168.83.246 8443 v1.30.1 crio true true} ...
	I0603 13:49:52.906774 1143252 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
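The [Unit]/[Service] fragment above is written out a few lines below as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Once it is in place, the effective unit can be reviewed with:

systemctl cat kubelet                                        # unit plus all drop-ins
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # just the override written by the runner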
	I0603 13:49:52.906866 1143252 ssh_runner.go:195] Run: crio config
	I0603 13:49:52.954082 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:49:52.954111 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:49:52.954129 1143252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:49:52.954159 1143252 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.246 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223260 NodeName:embed-certs-223260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:49:52.954355 1143252 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223260"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
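The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and compared against the existing file before being promoted (see the diff and cp further down in this log). A sketch of inspecting it by hand on the node:

# same comparison the runner performs later in this log
sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
# four documents are expected: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration
sudo grep -c '^kind:' /var/tmp/minikube/kubeadm.yaml.new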
	
	I0603 13:49:52.954446 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:49:52.964488 1143252 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:49:52.964582 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:49:52.974118 1143252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 13:49:52.990701 1143252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:49:53.007539 1143252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 13:49:53.024943 1143252 ssh_runner.go:195] Run: grep 192.168.83.246	control-plane.minikube.internal$ /etc/hosts
	I0603 13:49:53.029097 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:53.041234 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:53.178449 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:49:53.195718 1143252 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260 for IP: 192.168.83.246
	I0603 13:49:53.195750 1143252 certs.go:194] generating shared ca certs ...
	I0603 13:49:53.195769 1143252 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:49:53.195954 1143252 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:49:53.196021 1143252 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:49:53.196035 1143252 certs.go:256] generating profile certs ...
	I0603 13:49:53.196256 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/client.key
	I0603 13:49:53.196341 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key.90d43877
	I0603 13:49:53.196437 1143252 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key
	I0603 13:49:53.196605 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:49:53.196663 1143252 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:49:53.196678 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:49:53.196708 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:49:53.196756 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:49:53.196787 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:49:53.196838 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:53.197895 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:49:53.231612 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:49:53.263516 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:49:50.870317 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870816 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870841 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.870781 1144471 retry.go:31] will retry after 855.15183ms: waiting for machine to come up
	I0603 13:49:51.727393 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727960 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:51.727869 1144471 retry.go:31] will retry after 997.293541ms: waiting for machine to come up
	I0603 13:49:52.726578 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727036 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727073 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:52.726953 1144471 retry.go:31] will retry after 1.4233414s: waiting for machine to come up
	I0603 13:49:54.151594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152072 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152099 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:54.152021 1144471 retry.go:31] will retry after 1.348888248s: waiting for machine to come up
	I0603 13:49:53.303724 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:49:53.334700 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 13:49:53.371594 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:49:53.396381 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:49:53.420985 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:49:53.445334 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:49:53.469632 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:49:53.495720 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:49:53.522416 1143252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:49:53.541593 1143252 ssh_runner.go:195] Run: openssl version
	I0603 13:49:53.547653 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:49:53.558802 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563511 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563579 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.569691 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:49:53.582814 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:49:53.595684 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600613 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.607008 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:49:53.619919 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:49:53.632663 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637604 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.643844 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
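The 8-hex-digit link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are openssl subject hashes of the respective certificates, which is how OpenSSL locates CAs under /etc/ssl/certs. A small sketch reproducing the last one:

openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
ls -l /etc/ssl/certs/b5213941.0                                           # symlink to /etc/ssl/certs/minikubeCA.pem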
	I0603 13:49:53.655934 1143252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:49:53.660801 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:49:53.667391 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:49:53.674382 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:49:53.681121 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:49:53.687496 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:49:53.693623 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
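Each of the -checkend 86400 probes above is a pure exit-status test: openssl returns 0 if the certificate is still valid 86400 seconds (24 hours) from now and non-zero if it will have expired by then. A standalone sketch:

if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
  echo "cert good for at least another 24h"
else
  echo "cert expires within 24h (or is already expired)"
fi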
	I0603 13:49:53.699764 1143252 kubeadm.go:391] StartCluster: {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:49:53.699871 1143252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:49:53.699928 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.736588 1143252 cri.go:89] found id: ""
	I0603 13:49:53.736662 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:49:53.750620 1143252 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:49:53.750644 1143252 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:49:53.750652 1143252 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:49:53.750716 1143252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:49:53.765026 1143252 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:49:53.766297 1143252 kubeconfig.go:125] found "embed-certs-223260" server: "https://192.168.83.246:8443"
	I0603 13:49:53.768662 1143252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:49:53.779583 1143252 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.246
	I0603 13:49:53.779625 1143252 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:49:53.779639 1143252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:49:53.779695 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.820312 1143252 cri.go:89] found id: ""
	I0603 13:49:53.820398 1143252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:49:53.838446 1143252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:49:53.849623 1143252 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:49:53.849643 1143252 kubeadm.go:156] found existing configuration files:
	
	I0603 13:49:53.849700 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:49:53.859379 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:49:53.859451 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:49:53.869939 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:49:53.880455 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:49:53.880527 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:49:53.890918 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.900841 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:49:53.900894 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.910968 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:49:53.921064 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:49:53.921121 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:49:53.931550 1143252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:49:53.942309 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.078959 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.842079 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.043420 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.111164 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.220384 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:49:55.220475 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:55.721612 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.221513 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.257801 1143252 api_server.go:72] duration metric: took 1.037411844s to wait for apiserver process to appear ...
	I0603 13:49:56.257845 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:49:56.257874 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
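The 403 and 500 responses that follow are the normal progression of /healthz while the control plane comes back: unauthenticated requests are rejected until the RBAC bootstrap roles exist, after which the endpoint reports each post-start hook until they all flip to ok. A sketch of probing the same endpoint by hand from the node:

# unauthenticated probe; expect 403, then 500 with the [+]/[-] check list, then "ok" (or the full list with ?verbose)
curl -ks 'https://192.168.83.246:8443/healthz?verbose'
# the process check the runner performs before probing
sudo pgrep -xnf 'kube-apiserver.*minikube.*'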
	I0603 13:49:55.502734 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503282 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503313 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:55.503226 1144471 retry.go:31] will retry after 1.733012887s: waiting for machine to come up
	I0603 13:49:57.238544 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.238975 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.239006 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:57.238917 1144471 retry.go:31] will retry after 2.565512625s: waiting for machine to come up
	I0603 13:49:59.806662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807077 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807105 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:59.807024 1144471 retry.go:31] will retry after 2.759375951s: waiting for machine to come up
	I0603 13:49:59.684015 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.684058 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.684078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.757751 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.757791 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.758846 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.779923 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:49:59.779974 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.258098 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.265061 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.265089 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.758643 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.764364 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.764400 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.257950 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.262846 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.262875 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.758078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.763269 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.763301 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.258641 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.263628 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.263658 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.758205 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.765436 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.765470 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:03.258663 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:03.263141 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:50:03.269787 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:03.269817 1143252 api_server.go:131] duration metric: took 7.011964721s to wait for apiserver health ...
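The entries above show the restart logic polling the apiserver's /healthz endpoint roughly every 500ms and treating each 500 response (with its per-poststarthook status lines) as "not ready yet" until a 200 arrives. A minimal Go sketch of that polling pattern, assuming an illustrative URL, interval, and timeout, and skipping TLS verification only to keep the example self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves HTTPS with a cluster CA; verification is
		// skipped here purely so the sketch runs without cert files.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// A 500 body lists each failed poststarthook, as in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.246:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}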
	I0603 13:50:03.269827 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:50:03.269833 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:03.271812 1143252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:03.273154 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:03.285329 1143252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:50:03.305480 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:03.317546 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:03.317601 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:03.317614 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:03.317627 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:03.317637 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:03.317645 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:50:03.317658 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:03.317667 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:03.317677 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:50:03.317686 1143252 system_pods.go:74] duration metric: took 12.177585ms to wait for pod list to return data ...
	I0603 13:50:03.317695 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:03.321445 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:03.321479 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:03.321493 1143252 node_conditions.go:105] duration metric: took 3.787651ms to run NodePressure ...
	I0603 13:50:03.321512 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:03.598576 1143252 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604196 1143252 kubeadm.go:733] kubelet initialised
	I0603 13:50:03.604219 1143252 kubeadm.go:734] duration metric: took 5.606021ms waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604236 1143252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:03.611441 1143252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.615911 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615936 1143252 pod_ready.go:81] duration metric: took 4.468017ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.615945 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615955 1143252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.620663 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620683 1143252 pod_ready.go:81] duration metric: took 4.71967ms for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.620691 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620697 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.624894 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624917 1143252 pod_ready.go:81] duration metric: took 4.212227ms for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.624925 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624933 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.708636 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708665 1143252 pod_ready.go:81] duration metric: took 83.72445ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.708675 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708681 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.109391 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109454 1143252 pod_ready.go:81] duration metric: took 400.761651ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.109469 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109478 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.509683 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509712 1143252 pod_ready.go:81] duration metric: took 400.226435ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.509723 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509730 1143252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.909629 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909659 1143252 pod_ready.go:81] duration metric: took 399.917901ms for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.909669 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909679 1143252 pod_ready.go:38] duration metric: took 1.30543039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
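The pod_ready entries above check each system-critical pod's Ready condition and skip ahead while the hosting node still reports Ready=False. A rough client-go sketch of polling one pod's Ready condition; the kubeconfig path is a placeholder, while the namespace and pod name are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; adjust for a real cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-qdjrv", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}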
	I0603 13:50:04.909697 1143252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:04.921682 1143252 ops.go:34] apiserver oom_adj: -16
	I0603 13:50:04.921708 1143252 kubeadm.go:591] duration metric: took 11.171050234s to restartPrimaryControlPlane
	I0603 13:50:04.921717 1143252 kubeadm.go:393] duration metric: took 11.221962831s to StartCluster
	I0603 13:50:04.921737 1143252 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.921807 1143252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:04.923342 1143252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.923628 1143252 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:04.927063 1143252 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:04.923693 1143252 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:04.923865 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:04.928850 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:04.928873 1143252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223260"
	I0603 13:50:04.928872 1143252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223260"
	I0603 13:50:04.928889 1143252 addons.go:69] Setting metrics-server=true in profile "embed-certs-223260"
	I0603 13:50:04.928906 1143252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223260"
	I0603 13:50:04.928923 1143252 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223260"
	I0603 13:50:04.928935 1143252 addons.go:234] Setting addon metrics-server=true in "embed-certs-223260"
	W0603 13:50:04.928938 1143252 addons.go:243] addon storage-provisioner should already be in state true
	W0603 13:50:04.928945 1143252 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.929307 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929346 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929352 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929372 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929597 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929630 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.944948 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0603 13:50:04.945071 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0603 13:50:04.945489 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.945571 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.946137 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946166 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946299 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946319 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946589 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946650 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946798 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.947022 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0603 13:50:04.947210 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.947250 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.947517 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.948043 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.948069 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.948437 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.949064 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.949107 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.950532 1143252 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223260"
	W0603 13:50:04.950558 1143252 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:04.950589 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.950951 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.951008 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.964051 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37589
	I0603 13:50:04.964078 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0603 13:50:04.964513 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.964562 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.965062 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965088 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965128 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965153 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965473 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965532 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965652 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.965740 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.967606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.967739 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.969783 1143252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:04.971193 1143252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:02.567560 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.567988 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.568020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:50:02.567915 1144471 retry.go:31] will retry after 3.955051362s: waiting for machine to come up
	I0603 13:50:04.972568 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:04.972588 1143252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:04.972606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971275 1143252 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:04.972634 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:04.972658 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971495 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0603 13:50:04.973108 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.973575 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.973599 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.973931 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.974623 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.974658 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.976128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976251 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976535 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976559 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976709 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976724 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976768 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976915 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977099 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977156 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977242 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977305 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.977500 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.990810 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0603 13:50:04.991293 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.991844 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.991875 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.992279 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.992499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.994225 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.994456 1143252 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:04.994476 1143252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:04.994490 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.997771 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998210 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.998239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998418 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.998627 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.998811 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.998941 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:05.119962 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:05.140880 1143252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:05.271863 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:05.275815 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:05.275843 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:05.294572 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:05.346520 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:05.346553 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:05.417100 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:05.417141 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:05.496250 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
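The addon manifests are copied to /etc/kubernetes/addons and applied with the node's bundled kubectl. A simplified local sketch of that apply step using os/exec (in the log the command actually runs on the node via sudo over SSH; the binary and manifest paths below mirror the entries above but are assumptions for illustration):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon shells out to kubectl, as the ssh_runner entries above do,
// to apply one or more addon manifests against the given kubeconfig.
func applyAddon(kubectl, kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	return err
}

func main() {
	// Paths modelled on the log entries above; adjust for a real node.
	err := applyAddon("/var/lib/minikube/binaries/v1.30.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml")
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}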
	I0603 13:50:06.207746 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207781 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.207849 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207873 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208103 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208152 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208161 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208182 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208157 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208197 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208200 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208216 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208208 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208284 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208572 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208590 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208691 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208703 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208724 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.216764 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.216783 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.217095 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.217111 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374254 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374281 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374603 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374623 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374634 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374638 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.374644 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374901 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374916 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374933 1143252 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223260"
	I0603 13:50:06.374948 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.377491 1143252 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:50:08.083130 1143678 start.go:364] duration metric: took 3m45.627229097s to acquireMachinesLock for "old-k8s-version-151788"
	I0603 13:50:08.083256 1143678 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:08.083266 1143678 fix.go:54] fixHost starting: 
	I0603 13:50:08.083762 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:08.083812 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:08.103187 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0603 13:50:08.103693 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:08.104269 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:50:08.104299 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:08.104746 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:08.105115 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:08.105347 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetState
	I0603 13:50:08.107125 1143678 fix.go:112] recreateIfNeeded on old-k8s-version-151788: state=Stopped err=<nil>
	I0603 13:50:08.107173 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	W0603 13:50:08.107340 1143678 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:08.109207 1143678 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-151788" ...
	I0603 13:50:06.378684 1143252 addons.go:510] duration metric: took 1.4549999s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 13:50:07.145643 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:06.526793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527302 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Found IP for machine: 192.168.39.177
	I0603 13:50:06.527341 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has current primary IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527366 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserving static IP address...
	I0603 13:50:06.527822 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserved static IP address: 192.168.39.177
	I0603 13:50:06.527857 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for SSH to be available...
	I0603 13:50:06.527902 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.527956 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | skip adding static IP to network mk-default-k8s-diff-port-030870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"}
	I0603 13:50:06.527973 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Getting to WaitForSSH function...
	I0603 13:50:06.530287 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.530696 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530802 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH client type: external
	I0603 13:50:06.530827 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa (-rw-------)
	I0603 13:50:06.530849 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:06.530866 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | About to run SSH command:
	I0603 13:50:06.530877 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | exit 0
	I0603 13:50:06.653910 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:06.654259 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetConfigRaw
	I0603 13:50:06.654981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:06.658094 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658561 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.658600 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658921 1143450 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/config.json ...
	I0603 13:50:06.659144 1143450 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:06.659168 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:06.659486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.662534 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.662915 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.662959 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.663059 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.663258 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663476 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663660 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.663866 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.664103 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.664115 1143450 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:06.766054 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:06.766083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766406 1143450 buildroot.go:166] provisioning hostname "default-k8s-diff-port-030870"
	I0603 13:50:06.766440 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.769445 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.769820 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.769871 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.770029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.770244 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770423 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770670 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.770893 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.771057 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.771070 1143450 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-030870 && echo "default-k8s-diff-port-030870" | sudo tee /etc/hostname
	I0603 13:50:06.889997 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-030870
	
	I0603 13:50:06.890029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.893778 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894260 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.894297 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894614 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.894826 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895211 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.895423 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.895608 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.895625 1143450 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-030870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-030870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-030870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:07.007930 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
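Each provisioning step here ("Using SSH client type: native", "About to run SSH command: ...") is a single command executed over an SSH session to the VM. A minimal sketch of that pattern with golang.org/x/crypto/ssh; the host, user, key path, and command are modelled on the log but should be read as placeholders, and host key checking is disabled only to mirror the StrictHostKeyChecking=no flag shown earlier:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one command on host over SSH and returns its combined output.
func runRemote(host, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	config := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", host+":22", config)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder values modelled on the provisioning step in the log.
	out, err := runRemote("192.168.39.177", "docker",
		"/home/jenkins/.minikube/machines/default-k8s-diff-port-030870/id_rsa",
		`sudo hostname default-k8s-diff-port-030870 && echo "default-k8s-diff-port-030870" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}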
	I0603 13:50:07.007971 1143450 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:07.008009 1143450 buildroot.go:174] setting up certificates
	I0603 13:50:07.008020 1143450 provision.go:84] configureAuth start
	I0603 13:50:07.008034 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:07.008433 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:07.011208 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011607 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.011640 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011774 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.013986 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014431 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.014462 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014655 1143450 provision.go:143] copyHostCerts
	I0603 13:50:07.014726 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:07.014737 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:07.014787 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:07.014874 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:07.014882 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:07.014902 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:07.014952 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:07.014959 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:07.014974 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:07.015020 1143450 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-030870 san=[127.0.0.1 192.168.39.177 default-k8s-diff-port-030870 localhost minikube]
	I0603 13:50:07.402535 1143450 provision.go:177] copyRemoteCerts
	I0603 13:50:07.402595 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:07.402626 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.405891 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406240 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.406272 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406484 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.406718 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.406943 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.407132 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.489480 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:07.517212 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 13:50:07.543510 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:07.570284 1143450 provision.go:87] duration metric: took 562.244781ms to configureAuth
	I0603 13:50:07.570318 1143450 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:07.570537 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:07.570629 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.574171 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574706 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.574739 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574948 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.575262 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575549 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575781 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.575965 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.576217 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.576247 1143450 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:07.839415 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:07.839455 1143450 machine.go:97] duration metric: took 1.180296439s to provisionDockerMachine
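The SSH command above writes /etc/sysconfig/crio.minikube on the guest and restarts CRI-O. A rough Go sketch of issuing such a remote write-and-restart by shelling out to the local ssh client; the key path is a placeholder and this is not minikube's ssh_runner/sshutil implementation:

    // Illustrative sketch: write an env file on the guest and restart CRI-O over SSH.
    // User/host come from the log above; the key path is a placeholder.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        remote := "sudo mkdir -p /etc/sysconfig && " +
            "printf '%s\\n' \"CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\" | " +
            "sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"

        cmd := exec.Command("ssh",
            "-i", "/path/to/id_rsa", // placeholder private key
            "docker@192.168.39.177", // user@IP as reported in the log
            remote)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "remote provisioning failed:", err)
            os.Exit(1)
        }
    }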
	I0603 13:50:07.839468 1143450 start.go:293] postStartSetup for "default-k8s-diff-port-030870" (driver="kvm2")
	I0603 13:50:07.839482 1143450 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:07.839506 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:07.839843 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:07.839872 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.842547 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.842884 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.842918 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.843234 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.843471 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.843708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.843952 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.927654 1143450 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:07.932965 1143450 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:07.932997 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:07.933082 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:07.933202 1143450 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:07.933343 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:07.945059 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:07.975774 1143450 start.go:296] duration metric: took 136.280559ms for postStartSetup
	I0603 13:50:07.975822 1143450 fix.go:56] duration metric: took 20.481265153s for fixHost
	I0603 13:50:07.975848 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.979035 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979436 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.979486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979737 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.980012 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980228 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980452 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.980691 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.980935 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.980954 1143450 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:50:08.082946 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422608.057620379
	
	I0603 13:50:08.082978 1143450 fix.go:216] guest clock: 1717422608.057620379
	I0603 13:50:08.082988 1143450 fix.go:229] Guest: 2024-06-03 13:50:08.057620379 +0000 UTC Remote: 2024-06-03 13:50:07.975826846 +0000 UTC m=+237.845886752 (delta=81.793533ms)
	I0603 13:50:08.083018 1143450 fix.go:200] guest clock delta is within tolerance: 81.793533ms
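The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~82ms skew. A small self-contained sketch of that comparison; the sample timestamp is taken from the log, while the 2s tolerance is an assumed threshold, not minikube's constant:

    // Sketch of the guest-clock check: parse `date +%s.%N` output from the guest
    // and compare it with the host clock against an assumed tolerance.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func parseEpoch(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Right-pad the fraction to 9 digits so ".05" means 50ms, not 5ns.
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseEpoch("1717422608.057620379") // value from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumption, not minikube's threshold
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }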
	I0603 13:50:08.083025 1143450 start.go:83] releasing machines lock for "default-k8s-diff-port-030870", held for 20.588515063s
	I0603 13:50:08.083060 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.083369 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:08.086674 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087202 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.087285 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087508 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088324 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088575 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088673 1143450 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:08.088758 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.088823 1143450 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:08.088852 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.092020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092175 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092406 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092485 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092863 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092893 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092916 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.092924 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.093273 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093522 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093541 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093708 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.093710 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.176292 1143450 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:08.204977 1143450 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:08.367121 1143450 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:08.376347 1143450 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:08.376431 1143450 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:08.398639 1143450 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:08.398672 1143450 start.go:494] detecting cgroup driver to use...
	I0603 13:50:08.398750 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:08.422776 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:08.443035 1143450 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:08.443108 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:08.459853 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:08.482009 1143450 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:08.631237 1143450 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:08.806623 1143450 docker.go:233] disabling docker service ...
	I0603 13:50:08.806715 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:08.827122 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:08.842457 1143450 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:08.999795 1143450 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:09.148706 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:09.167314 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:09.188867 1143450 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:09.188959 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.202239 1143450 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:09.202319 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.216228 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.231140 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.246767 1143450 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:09.260418 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.274349 1143450 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.300588 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
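The sed commands above pin pause_image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. An illustrative Go helper doing the same replace-or-append edit on a local copy of such a drop-in (not the code minikube runs; it drives sed over SSH instead):

    // Illustrative: force a `key = "value"` line in a CRI-O TOML drop-in,
    // replacing an existing assignment or appending one if missing.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func setTOMLKey(content []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        line := fmt.Sprintf("%s = %q", key, value)
        if re.Match(content) {
            return re.ReplaceAll(content, []byte(line))
        }
        return append(content, []byte("\n"+line+"\n")...)
    }

    func main() {
        path := "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        data = setTOMLKey(data, "pause_image", "registry.k8s.io/pause:3.9")
        data = setTOMLKey(data, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, data, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }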
	I0603 13:50:09.314659 1143450 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:09.326844 1143450 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:09.326919 1143450 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:09.344375 1143450 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:09.357955 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:09.504105 1143450 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:09.685468 1143450 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:09.685562 1143450 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:09.690863 1143450 start.go:562] Will wait 60s for crictl version
	I0603 13:50:09.690943 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:50:09.696532 1143450 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:09.742785 1143450 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:09.742891 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.782137 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.816251 1143450 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:09.817854 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:09.821049 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821555 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:09.821595 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821855 1143450 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:09.826658 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
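The bash one-liner above updates /etc/hosts idempotently: strip any existing host.minikube.internal line, then append a fresh entry. A sketch of the same upsert in Go, writing to a scratch file instead of /etc/hosts:

    // Illustrative: remove any "<IP>\t<name>" line for the given name, then
    // append the desired entry. Operates on a sample file, not /etc/hosts.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func upsertHostsEntry(content, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(content, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale entry for this name
            }
            kept = append(kept, line)
        }
        out := strings.TrimRight(strings.Join(kept, "\n"), "\n")
        return out + "\n" + ip + "\t" + name + "\n"
    }

    func main() {
        const path = "hosts.sample" // stand-in for /etc/hosts
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        updated := upsertHostsEntry(string(data), "192.168.39.1", "host.minikube.internal")
        if err := os.WriteFile(path, []byte(updated), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }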
	I0603 13:50:09.841351 1143450 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:09.841521 1143450 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:09.841586 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:09.883751 1143450 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:09.883825 1143450 ssh_runner.go:195] Run: which lz4
	I0603 13:50:09.888383 1143450 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:50:09.893662 1143450 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:09.893704 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:50:08.110706 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .Start
	I0603 13:50:08.110954 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring networks are active...
	I0603 13:50:08.111890 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network default is active
	I0603 13:50:08.112291 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network mk-old-k8s-version-151788 is active
	I0603 13:50:08.112708 1143678 main.go:141] libmachine: (old-k8s-version-151788) Getting domain xml...
	I0603 13:50:08.113547 1143678 main.go:141] libmachine: (old-k8s-version-151788) Creating domain...
	I0603 13:50:09.528855 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting to get IP...
	I0603 13:50:09.529978 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.530410 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.530453 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.530382 1144654 retry.go:31] will retry after 208.935457ms: waiting for machine to come up
	I0603 13:50:09.741245 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.741816 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.741864 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.741769 1144654 retry.go:31] will retry after 376.532154ms: waiting for machine to come up
	I0603 13:50:10.120533 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.121261 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.121337 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.121239 1144654 retry.go:31] will retry after 339.126643ms: waiting for machine to come up
	I0603 13:50:10.461708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.462488 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.462514 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.462425 1144654 retry.go:31] will retry after 490.057426ms: waiting for machine to come up
	I0603 13:50:10.954107 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.954887 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.954921 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.954840 1144654 retry.go:31] will retry after 711.209001ms: waiting for machine to come up
	I0603 13:50:11.667459 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:11.668198 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:11.668231 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:11.668135 1144654 retry.go:31] will retry after 928.879285ms: waiting for machine to come up
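The retry.go lines above wait for the restarted old-k8s-version VM to obtain a DHCP lease, backing off with growing, jittered delays. A generic version of that wait loop; the condition is stubbed and the backoff parameters are illustrative assumptions, not retry.go's:

    // Illustrative wait loop: poll a condition with increasing, jittered delays
    // until it succeeds or a deadline passes.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryUntil(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            fmt.Printf("attempt %d failed (%v); will retry after %v\n", attempt, err, delay+jitter)
            time.Sleep(delay + jitter)
            if delay < 5*time.Second {
                delay += delay / 2 // grow ~1.5x per attempt, loosely capped
            }
        }
    }

    func main() {
        start := time.Now()
        err := retryUntil(10*time.Second, func() error {
            // Stub condition: pretend the IP shows up after ~3 seconds.
            if time.Since(start) < 3*time.Second {
                return errors.New("unable to find current IP address")
            }
            return nil
        })
        fmt.Println("result:", err)
    }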
	I0603 13:50:09.645006 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:10.146403 1143252 node_ready.go:49] node "embed-certs-223260" has status "Ready":"True"
	I0603 13:50:10.146438 1143252 node_ready.go:38] duration metric: took 5.005510729s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:10.146453 1143252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:10.154249 1143252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164361 1143252 pod_ready.go:92] pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:10.164401 1143252 pod_ready.go:81] duration metric: took 10.115855ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164419 1143252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675214 1143252 pod_ready.go:92] pod "etcd-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:11.675243 1143252 pod_ready.go:81] duration metric: took 1.510815036s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675254 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.522734 1143450 crio.go:462] duration metric: took 1.634406537s to copy over tarball
	I0603 13:50:11.522837 1143450 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:13.983446 1143450 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460564522s)
	I0603 13:50:13.983484 1143450 crio.go:469] duration metric: took 2.460706596s to extract the tarball
	I0603 13:50:13.983503 1143450 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:50:14.029942 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:14.083084 1143450 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:50:14.083113 1143450 cache_images.go:84] Images are preloaded, skipping loading
	I0603 13:50:14.083122 1143450 kubeadm.go:928] updating node { 192.168.39.177 8444 v1.30.1 crio true true} ...
	I0603 13:50:14.083247 1143450 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-030870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:14.083319 1143450 ssh_runner.go:195] Run: crio config
	I0603 13:50:14.142320 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:14.142344 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:14.142354 1143450 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:14.142379 1143450 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-030870 NodeName:default-k8s-diff-port-030870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:50:14.142517 1143450 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-030870"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:14.142577 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:50:14.153585 1143450 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:14.153687 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:14.164499 1143450 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0603 13:50:14.186564 1143450 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:14.205489 1143450 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
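The kubeadm config dumped above is rendered from the cluster profile and copied to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template rendering of just the InitConfiguration section, taking the node name, IP and API server port as inputs; the template is illustrative, not minikube's generator:

    // Illustrative: render an InitConfiguration stub from a few profile fields.
    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
        params := struct {
            NodeName      string
            NodeIP        string
            APIServerPort int
        }{
            NodeName:      "default-k8s-diff-port-030870",
            NodeIP:        "192.168.39.177",
            APIServerPort: 8444,
        }
        tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
        if err := tmpl.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }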
	I0603 13:50:14.227005 1143450 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:14.231782 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:14.247433 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:14.368336 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:14.391791 1143450 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870 for IP: 192.168.39.177
	I0603 13:50:14.391816 1143450 certs.go:194] generating shared ca certs ...
	I0603 13:50:14.391840 1143450 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:14.392015 1143450 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:14.392075 1143450 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:14.392090 1143450 certs.go:256] generating profile certs ...
	I0603 13:50:14.392282 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/client.key
	I0603 13:50:14.392373 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key.7a30187e
	I0603 13:50:14.392428 1143450 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key
	I0603 13:50:14.392545 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:14.392602 1143450 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:14.392616 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:14.392650 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:14.392687 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:14.392722 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:14.392780 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:14.393706 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:14.424354 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:14.476267 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:14.514457 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:14.548166 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 13:50:14.584479 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:14.626894 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:14.663103 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:50:14.696750 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:14.725770 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:14.755779 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:14.786060 1143450 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:14.805976 1143450 ssh_runner.go:195] Run: openssl version
	I0603 13:50:14.812737 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:14.824707 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831139 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831255 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.838855 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:14.850974 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:14.865613 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871431 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871518 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.878919 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:14.891371 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:14.903721 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909069 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909180 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.915904 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:50:14.928622 1143450 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:14.934466 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:14.941321 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:14.947960 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:14.955629 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:14.962761 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:14.970396 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
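Each `openssl x509 ... -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. The equivalent check with Go's crypto/x509, using a placeholder path:

    // Illustrative: report whether a PEM-encoded certificate expires within `window`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Placeholder for /var/lib/minikube/certs/apiserver-kubelet-client.crt.
        soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }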
	I0603 13:50:14.977381 1143450 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:14.977543 1143450 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:14.977599 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.042628 1143450 cri.go:89] found id: ""
	I0603 13:50:15.042733 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:15.055439 1143450 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:15.055469 1143450 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:15.055476 1143450 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:15.055535 1143450 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:15.067250 1143450 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:15.068159 1143450 kubeconfig.go:125] found "default-k8s-diff-port-030870" server: "https://192.168.39.177:8444"
	I0603 13:50:15.070060 1143450 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:15.082723 1143450 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.177
	I0603 13:50:15.082788 1143450 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:15.082809 1143450 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:15.082972 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.124369 1143450 cri.go:89] found id: ""
	I0603 13:50:15.124509 1143450 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:15.144064 1143450 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:15.156148 1143450 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:15.156174 1143450 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:15.156240 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 13:50:15.166927 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:15.167006 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:12.598536 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:12.598972 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:12.599008 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:12.598948 1144654 retry.go:31] will retry after 882.970422ms: waiting for machine to come up
	I0603 13:50:13.483171 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:13.483723 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:13.483758 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:13.483640 1144654 retry.go:31] will retry after 1.215665556s: waiting for machine to come up
	I0603 13:50:14.701392 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:14.701960 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:14.701991 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:14.701899 1144654 retry.go:31] will retry after 1.614371992s: waiting for machine to come up
	I0603 13:50:16.318708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:16.319127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:16.319148 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:16.319103 1144654 retry.go:31] will retry after 2.146267337s: waiting for machine to come up
	I0603 13:50:13.683419 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:15.684744 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:16.792510 1143252 pod_ready.go:92] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.792538 1143252 pod_ready.go:81] duration metric: took 5.117277447s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.792549 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798083 1143252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.798112 1143252 pod_ready.go:81] duration metric: took 5.554915ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798126 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804217 1143252 pod_ready.go:92] pod "kube-proxy-s5vdl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.804247 1143252 pod_ready.go:81] duration metric: took 6.113411ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804262 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810317 1143252 pod_ready.go:92] pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.810343 1143252 pod_ready.go:81] duration metric: took 6.073098ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810357 1143252 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:15.178645 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 13:50:15.486524 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:15.486608 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:15.497694 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.509586 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:15.509665 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.521976 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 13:50:15.533446 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:15.533535 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:50:15.545525 1143450 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:15.557558 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:15.710109 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.725380 1143450 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015227554s)
	I0603 13:50:16.725452 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.964275 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.061586 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.183665 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:17.183764 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:17.684365 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.184269 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.254733 1143450 api_server.go:72] duration metric: took 1.07106398s to wait for apiserver process to appear ...
	I0603 13:50:18.254769 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:50:18.254797 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:18.466825 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:18.467260 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:18.467292 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:18.467187 1144654 retry.go:31] will retry after 2.752334209s: waiting for machine to come up
	I0603 13:50:21.220813 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:21.221235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:21.221267 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:21.221182 1144654 retry.go:31] will retry after 3.082080728s: waiting for machine to come up
	I0603 13:50:18.819188 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.323790 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.193140 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.193177 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.193193 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.265534 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.265580 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.265602 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.277669 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.277703 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.754973 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.761802 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:21.761841 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.255071 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.262166 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.262227 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.755128 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.759896 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.759936 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.255520 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.262093 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.262128 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.755784 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.760053 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.760079 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.255534 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.259793 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:24.259820 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.755365 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.759964 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:50:24.768830 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:24.768862 1143450 api_server.go:131] duration metric: took 6.51408552s to wait for apiserver health ...
	I0603 13:50:24.768872 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:24.768879 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:24.771099 1143450 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:24.772806 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:24.784204 1143450 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:50:24.805572 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:24.816944 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:24.816988 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:24.816997 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:24.817008 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:24.817021 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:24.817028 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:50:24.817037 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:24.817044 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:24.817050 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:50:24.817060 1143450 system_pods.go:74] duration metric: took 11.461696ms to wait for pod list to return data ...
	I0603 13:50:24.817069 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:24.820804 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:24.820834 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:24.820846 1143450 node_conditions.go:105] duration metric: took 3.771492ms to run NodePressure ...
	I0603 13:50:24.820865 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:25.098472 1143450 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103237 1143450 kubeadm.go:733] kubelet initialised
	I0603 13:50:25.103263 1143450 kubeadm.go:734] duration metric: took 4.763539ms waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103274 1143450 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:25.109364 1143450 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.114629 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114662 1143450 pod_ready.go:81] duration metric: took 5.268473ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.114676 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114687 1143450 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.118734 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118777 1143450 pod_ready.go:81] duration metric: took 4.079659ms for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.118790 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118810 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.123298 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123334 1143450 pod_ready.go:81] duration metric: took 4.509948ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.123351 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123361 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.210283 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210316 1143450 pod_ready.go:81] duration metric: took 86.945898ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.210329 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210338 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.609043 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609074 1143450 pod_ready.go:81] duration metric: took 398.728553ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.609084 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609091 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.009831 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009866 1143450 pod_ready.go:81] duration metric: took 400.766037ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.009880 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009888 1143450 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.410271 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410301 1143450 pod_ready.go:81] duration metric: took 400.402293ms for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.410315 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410326 1143450 pod_ready.go:38] duration metric: took 1.307039933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:26.410347 1143450 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:26.422726 1143450 ops.go:34] apiserver oom_adj: -16
	I0603 13:50:26.422753 1143450 kubeadm.go:591] duration metric: took 11.367271168s to restartPrimaryControlPlane
	I0603 13:50:26.422763 1143450 kubeadm.go:393] duration metric: took 11.445396197s to StartCluster
	I0603 13:50:26.422784 1143450 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.422866 1143450 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:26.424423 1143450 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.424744 1143450 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:26.426628 1143450 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:26.424855 1143450 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:26.424985 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:26.428227 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:26.428239 1143450 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428241 1143450 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428275 1143450 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-030870"
	I0603 13:50:26.428285 1143450 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428297 1143450 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:50:26.428243 1143450 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428338 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428404 1143450 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428428 1143450 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:26.428501 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428650 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428676 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428724 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428751 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428948 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.429001 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.445709 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0603 13:50:26.446187 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.446719 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.446743 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.447152 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.447817 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.447852 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.449660 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0603 13:50:26.449721 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0603 13:50:26.450120 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450161 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450735 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450755 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.450906 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450930 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.451177 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451333 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451421 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.451909 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.451951 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.455458 1143450 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.455484 1143450 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:26.455523 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.455776 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.455825 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.470807 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0603 13:50:26.471179 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0603 13:50:26.471763 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.471921 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472042 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0603 13:50:26.472471 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472501 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472575 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472750 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472760 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472966 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473095 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.473118 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.473132 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473134 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473357 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473486 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.474129 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.474183 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.475437 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.475594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.477911 1143450 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:26.479474 1143450 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:24.304462 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:24.305104 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:24.305175 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:24.305099 1144654 retry.go:31] will retry after 4.178596743s: waiting for machine to come up
	I0603 13:50:26.480998 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:26.481021 1143450 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:26.481047 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.479556 1143450 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.481095 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:26.481116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.484634 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.484694 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485147 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485160 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485538 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485628 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485729 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485829 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485856 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.485993 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.486040 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.486158 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.496035 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0603 13:50:26.496671 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.497270 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.497290 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.497719 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.497989 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.500018 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.500280 1143450 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.500298 1143450 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:26.500318 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.503226 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503732 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.503768 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503967 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.504212 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.504399 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.504556 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.608774 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:26.629145 1143450 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:26.692164 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.784756 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.788686 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:26.788711 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:26.841094 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:26.841129 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:26.907657 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:26.907688 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:26.963244 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963280 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963618 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963641 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963649 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963653 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.963657 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963962 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963980 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963982 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.971726 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.971748 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.972101 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.972125 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.975238 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:27.653643 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.653689 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654037 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654061 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.654078 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.654087 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654429 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.654484 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654507 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847367 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847397 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.847745 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.847770 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847779 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847785 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.847793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.848112 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.848130 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.848144 1143450 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-030870"
	I0603 13:50:27.851386 1143450 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0603 13:50:23.817272 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:25.818013 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:27.818160 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:29.798777 1142862 start.go:364] duration metric: took 56.694826675s to acquireMachinesLock for "no-preload-817450"
	I0603 13:50:29.798855 1142862 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:29.798866 1142862 fix.go:54] fixHost starting: 
	I0603 13:50:29.799329 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:29.799369 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:29.817787 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0603 13:50:29.818396 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:29.819003 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:50:29.819025 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:29.819450 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:29.819617 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:29.819782 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:50:29.821742 1142862 fix.go:112] recreateIfNeeded on no-preload-817450: state=Stopped err=<nil>
	I0603 13:50:29.821777 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	W0603 13:50:29.821973 1142862 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:29.823915 1142862 out.go:177] * Restarting existing kvm2 VM for "no-preload-817450" ...
	I0603 13:50:27.852929 1143450 addons.go:510] duration metric: took 1.428071927s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0603 13:50:28.633355 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:29.825584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Start
	I0603 13:50:29.825783 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring networks are active...
	I0603 13:50:29.826746 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network default is active
	I0603 13:50:29.827116 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network mk-no-preload-817450 is active
	I0603 13:50:29.827617 1142862 main.go:141] libmachine: (no-preload-817450) Getting domain xml...
	I0603 13:50:29.828419 1142862 main.go:141] libmachine: (no-preload-817450) Creating domain...
	I0603 13:50:28.485041 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.485598 1143678 main.go:141] libmachine: (old-k8s-version-151788) Found IP for machine: 192.168.50.65
	I0603 13:50:28.485624 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserving static IP address...
	I0603 13:50:28.485639 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has current primary IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.486053 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserved static IP address: 192.168.50.65
	I0603 13:50:28.486109 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.486123 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting for SSH to be available...
	I0603 13:50:28.486144 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | skip adding static IP to network mk-old-k8s-version-151788 - found existing host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"}
	I0603 13:50:28.486156 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Getting to WaitForSSH function...
	I0603 13:50:28.488305 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.488754 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.488788 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.489025 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH client type: external
	I0603 13:50:28.489048 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa (-rw-------)
	I0603 13:50:28.489114 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:28.489147 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | About to run SSH command:
	I0603 13:50:28.489167 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | exit 0
	I0603 13:50:28.613732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:28.614183 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:50:28.614879 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.617742 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.618270 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618481 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:50:28.618699 1143678 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:28.618719 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:28.618967 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.621356 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621655 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.621685 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.622117 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622321 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622511 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.622750 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.622946 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.622958 1143678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:28.726383 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:28.726419 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.726740 1143678 buildroot.go:166] provisioning hostname "old-k8s-version-151788"
	I0603 13:50:28.726777 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.727042 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.729901 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730372 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.730402 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730599 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.730824 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731031 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731205 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.731403 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.731585 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.731599 1143678 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-151788 && echo "old-k8s-version-151788" | sudo tee /etc/hostname
	I0603 13:50:28.848834 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-151788
	
	I0603 13:50:28.848867 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.852250 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852698 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.852721 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852980 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.853239 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853536 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853819 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.854093 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.854338 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.854367 1143678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-151788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-151788/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-151788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:28.967427 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:28.967461 1143678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:28.967520 1143678 buildroot.go:174] setting up certificates
	I0603 13:50:28.967538 1143678 provision.go:84] configureAuth start
	I0603 13:50:28.967550 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.967946 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.970841 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971226 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.971256 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971449 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.974316 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974702 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.974732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974911 1143678 provision.go:143] copyHostCerts
	I0603 13:50:28.974994 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:28.975010 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:28.975068 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:28.975247 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:28.975260 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:28.975283 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:28.975354 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:28.975362 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:28.975385 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:28.975463 1143678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-151788 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-151788]
	I0603 13:50:29.096777 1143678 provision.go:177] copyRemoteCerts
	I0603 13:50:29.096835 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:29.096865 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.099989 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100408 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.100434 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100644 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.100831 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.100975 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.101144 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.184886 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:29.211432 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 13:50:29.238552 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:50:29.266803 1143678 provision.go:87] duration metric: took 299.247567ms to configureAuth
	I0603 13:50:29.266844 1143678 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:29.267107 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:50:29.267220 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.270966 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271417 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.271472 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271688 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.271893 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272121 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272327 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.272544 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.272787 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.272811 1143678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:29.548407 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:29.548437 1143678 machine.go:97] duration metric: took 929.724002ms to provisionDockerMachine
	I0603 13:50:29.548449 1143678 start.go:293] postStartSetup for "old-k8s-version-151788" (driver="kvm2")
	I0603 13:50:29.548461 1143678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:29.548486 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.548924 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:29.548992 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.552127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552531 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.552571 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552756 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.552974 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.553166 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.553364 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.637026 1143678 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:29.641264 1143678 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:29.641293 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:29.641376 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:29.641509 1143678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:29.641600 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:29.657273 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:29.688757 1143678 start.go:296] duration metric: took 140.291954ms for postStartSetup
	I0603 13:50:29.688806 1143678 fix.go:56] duration metric: took 21.605539652s for fixHost
	I0603 13:50:29.688843 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.691764 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692170 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.692216 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692356 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.692623 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692814 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692996 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.693180 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.693372 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.693384 1143678 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:29.798629 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422629.770375968
	
	I0603 13:50:29.798655 1143678 fix.go:216] guest clock: 1717422629.770375968
	I0603 13:50:29.798662 1143678 fix.go:229] Guest: 2024-06-03 13:50:29.770375968 +0000 UTC Remote: 2024-06-03 13:50:29.688811675 +0000 UTC m=+247.377673500 (delta=81.564293ms)
	I0603 13:50:29.798683 1143678 fix.go:200] guest clock delta is within tolerance: 81.564293ms
	I0603 13:50:29.798688 1143678 start.go:83] releasing machines lock for "old-k8s-version-151788", held for 21.715483341s
	I0603 13:50:29.798712 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.799019 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:29.802078 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802479 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.802522 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802674 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803271 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803496 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803584 1143678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:29.803646 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.803961 1143678 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:29.803988 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.806505 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806863 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806926 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.806961 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807093 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807299 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807345 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.807386 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807476 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.807670 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807669 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.807841 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807947 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.808183 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.890622 1143678 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:29.918437 1143678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:30.064471 1143678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:30.073881 1143678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:30.073969 1143678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:30.097037 1143678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:30.097070 1143678 start.go:494] detecting cgroup driver to use...
	I0603 13:50:30.097147 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:30.114374 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:30.132000 1143678 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:30.132075 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:30.148156 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:30.164601 1143678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:30.303125 1143678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:30.475478 1143678 docker.go:233] disabling docker service ...
	I0603 13:50:30.475578 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:30.494632 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:30.513383 1143678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:30.691539 1143678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:30.849280 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:30.869107 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:30.893451 1143678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 13:50:30.893528 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.909358 1143678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:30.909465 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.926891 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.941879 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.957985 1143678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:30.971349 1143678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:30.984948 1143678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:30.985023 1143678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:30.999255 1143678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:31.011615 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:31.162848 1143678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:31.352121 1143678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:31.352190 1143678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:31.357946 1143678 start.go:562] Will wait 60s for crictl version
	I0603 13:50:31.358032 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:31.362540 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:31.410642 1143678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:31.410757 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.444750 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.482404 1143678 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 13:50:31.484218 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:31.488049 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488663 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:31.488695 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488985 1143678 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:31.494813 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:31.511436 1143678 kubeadm.go:877] updating cluster {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:31.511597 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:50:31.511659 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:31.571733 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:31.571819 1143678 ssh_runner.go:195] Run: which lz4
	I0603 13:50:31.577765 1143678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 13:50:31.583983 1143678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:31.584025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 13:50:30.319230 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:32.824874 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:30.633456 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:32.134192 1143450 node_ready.go:49] node "default-k8s-diff-port-030870" has status "Ready":"True"
	I0603 13:50:32.134227 1143450 node_ready.go:38] duration metric: took 5.505047986s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:32.134241 1143450 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:32.143157 1143450 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150075 1143450 pod_ready.go:92] pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:32.150113 1143450 pod_ready.go:81] duration metric: took 6.922006ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150128 1143450 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:34.157758 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:31.283193 1142862 main.go:141] libmachine: (no-preload-817450) Waiting to get IP...
	I0603 13:50:31.284191 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.284681 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.284757 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.284641 1144889 retry.go:31] will retry after 246.139268ms: waiting for machine to come up
	I0603 13:50:31.532345 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.533024 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.533056 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.532956 1144889 retry.go:31] will retry after 283.586657ms: waiting for machine to come up
	I0603 13:50:31.818610 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.819271 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.819302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.819235 1144889 retry.go:31] will retry after 345.327314ms: waiting for machine to come up
	I0603 13:50:32.165948 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.166532 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.166585 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.166485 1144889 retry.go:31] will retry after 567.370644ms: waiting for machine to come up
	I0603 13:50:32.735409 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.736074 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.736118 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.735978 1144889 retry.go:31] will retry after 523.349811ms: waiting for machine to come up
	I0603 13:50:33.261023 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.261738 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.261769 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.261685 1144889 retry.go:31] will retry after 617.256992ms: waiting for machine to come up
	I0603 13:50:33.880579 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.881159 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.881188 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.881113 1144889 retry.go:31] will retry after 975.807438ms: waiting for machine to come up
	I0603 13:50:34.858935 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:34.859418 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:34.859447 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:34.859365 1144889 retry.go:31] will retry after 1.257722281s: waiting for machine to come up
	I0603 13:50:33.399678 1143678 crio.go:462] duration metric: took 1.821959808s to copy over tarball
	I0603 13:50:33.399768 1143678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:36.631033 1143678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.231219364s)
	I0603 13:50:36.631081 1143678 crio.go:469] duration metric: took 3.231364789s to extract the tarball
	I0603 13:50:36.631092 1143678 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:50:36.677954 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:36.718160 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:36.718197 1143678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.718456 1143678 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.718302 1143678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.718343 1143678 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.718858 1143678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.720644 1143678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.720573 1143678 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720576 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.720603 1143678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.720608 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.721118 1143678 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.907182 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.907179 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.910017 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.920969 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.925739 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.935710 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.946767 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 13:50:36.973425 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.050763 1143678 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 13:50:37.050817 1143678 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.050846 1143678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 13:50:37.050876 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.050880 1143678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.050906 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162505 1143678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 13:50:37.162561 1143678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.162608 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162706 1143678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 13:50:37.162727 1143678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.162754 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162858 1143678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 13:50:37.162898 1143678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.162922 1143678 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 13:50:37.162965 1143678 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 13:50:37.163001 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162943 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.164963 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.165019 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.165136 1143678 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 13:50:37.165260 1143678 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.165295 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.188179 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.188292 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 13:50:37.188315 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.188371 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.188561 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.300592 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 13:50:37.300642 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 13:50:35.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.160066 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.334685 1143450 pod_ready.go:92] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.334719 1143450 pod_ready.go:81] duration metric: took 5.184582613s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.334732 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341104 1143450 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.341140 1143450 pod_ready.go:81] duration metric: took 6.399805ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341154 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347174 1143450 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.347208 1143450 pod_ready.go:81] duration metric: took 6.044519ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347220 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356909 1143450 pod_ready.go:92] pod "kube-proxy-thsrx" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.356949 1143450 pod_ready.go:81] duration metric: took 9.72108ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356962 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363891 1143450 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.363915 1143450 pod_ready.go:81] duration metric: took 6.9442ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363927 1143450 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:39.372092 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.118754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:36.119214 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:36.119251 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:36.119148 1144889 retry.go:31] will retry after 1.380813987s: waiting for machine to come up
	I0603 13:50:37.501464 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:37.501889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:37.501937 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:37.501849 1144889 retry.go:31] will retry after 2.144177789s: waiting for machine to come up
	I0603 13:50:39.648238 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:39.648744 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:39.648768 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:39.648693 1144889 retry.go:31] will retry after 1.947487062s: waiting for machine to come up
	I0603 13:50:37.360149 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 13:50:37.360196 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 13:50:37.360346 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 13:50:37.360371 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 13:50:37.360436 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 13:50:37.543409 1143678 cache_images.go:92] duration metric: took 825.189409ms to LoadCachedImages
	W0603 13:50:37.543559 1143678 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 13:50:37.543581 1143678 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0603 13:50:37.543723 1143678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-151788 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:37.543804 1143678 ssh_runner.go:195] Run: crio config
	I0603 13:50:37.601388 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:50:37.601428 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:37.601445 1143678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:37.601471 1143678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-151788 NodeName:old-k8s-version-151788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 13:50:37.601664 1143678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-151788"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:37.601746 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 13:50:37.613507 1143678 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:37.613588 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:37.623853 1143678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0603 13:50:37.642298 1143678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:37.660863 1143678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
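	(Note: the 2120-byte kubeadm.yaml.new written above is the v1beta2 kubeadm config rendered earlier in this log. A minimal sketch of a sanity check, not something this run executed; the paths come from the log and --dry-run is a standard kubeadm flag that only prints what would be done:
	  # hypothetical check, not part of this test run
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run )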
	I0603 13:50:37.679974 1143678 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:37.685376 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:37.702732 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:37.859343 1143678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:37.880684 1143678 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788 for IP: 192.168.50.65
	I0603 13:50:37.880714 1143678 certs.go:194] generating shared ca certs ...
	I0603 13:50:37.880737 1143678 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:37.880952 1143678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:37.881012 1143678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:37.881024 1143678 certs.go:256] generating profile certs ...
	I0603 13:50:37.881179 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key
	I0603 13:50:37.881279 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3
	I0603 13:50:37.881334 1143678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key
	I0603 13:50:37.881554 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:37.881602 1143678 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:37.881629 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:37.881667 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:37.881698 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:37.881730 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:37.881805 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:37.882741 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:37.919377 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:37.957218 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:37.987016 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:38.024442 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 13:50:38.051406 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:38.094816 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:38.143689 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:50:38.171488 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:38.197296 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:38.224025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:38.250728 1143678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:38.270485 1143678 ssh_runner.go:195] Run: openssl version
	I0603 13:50:38.276995 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:38.288742 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293880 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293955 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.300456 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:38.312180 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:38.324349 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329812 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329881 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.337560 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:38.350229 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:38.362635 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368842 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368920 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.376029 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
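	(Note: the /etc/ssl/certs/51391683.0, 3ec20f2e.0 and b5213941.0 link names above are OpenSSL subject-hash filenames. A minimal sketch of the same derivation for the minikubeCA cert, using the paths from the log:
	  # illustration only; the hash is computed from the certificate's subject
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0" )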
	I0603 13:50:38.387703 1143678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:38.393071 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:38.399760 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:38.406332 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:38.413154 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:38.419162 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:38.425818 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0603 13:50:38.432495 1143678 kubeadm.go:391] StartCluster: {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:38.432659 1143678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:38.432718 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.479889 1143678 cri.go:89] found id: ""
	I0603 13:50:38.479975 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:38.490549 1143678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:38.490574 1143678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:38.490580 1143678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:38.490637 1143678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:38.501024 1143678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:38.503665 1143678 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:38.504563 1143678 kubeconfig.go:62] /home/jenkins/minikube-integration/19011-1078924/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-151788" cluster setting kubeconfig missing "old-k8s-version-151788" context setting]
	I0603 13:50:38.505614 1143678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:38.562691 1143678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:38.573839 1143678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.65
	I0603 13:50:38.573889 1143678 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:38.573905 1143678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:38.573987 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.615876 1143678 cri.go:89] found id: ""
	I0603 13:50:38.615972 1143678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:38.633568 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:38.645197 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:38.645229 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:38.645291 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:50:38.655344 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:38.655423 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:38.665789 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:50:38.674765 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:38.674842 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:38.684268 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.693586 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:38.693650 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.703313 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:50:38.712523 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:38.712597 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:50:38.722362 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:38.732190 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:38.875545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.722534 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.970226 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.090817 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.193178 1143678 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:40.193485 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:40.693580 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.193579 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.693608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:42.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:39.318177 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.818337 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.373738 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:43.870381 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.597745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:41.598343 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:41.598372 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:41.598280 1144889 retry.go:31] will retry after 2.47307834s: waiting for machine to come up
	I0603 13:50:44.074548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:44.075009 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:44.075037 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:44.074970 1144889 retry.go:31] will retry after 3.055733752s: waiting for machine to come up
	I0603 13:50:42.693593 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.194448 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.693645 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.694583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.194065 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.694138 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.194173 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.694344 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:47.194063 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.316348 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:46.317245 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:47.133727 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134266 1142862 main.go:141] libmachine: (no-preload-817450) Found IP for machine: 192.168.72.125
	I0603 13:50:47.134301 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has current primary IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134308 1142862 main.go:141] libmachine: (no-preload-817450) Reserving static IP address...
	I0603 13:50:47.134745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.134777 1142862 main.go:141] libmachine: (no-preload-817450) Reserved static IP address: 192.168.72.125
	I0603 13:50:47.134797 1142862 main.go:141] libmachine: (no-preload-817450) DBG | skip adding static IP to network mk-no-preload-817450 - found existing host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"}
	I0603 13:50:47.134816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Getting to WaitForSSH function...
	I0603 13:50:47.134858 1142862 main.go:141] libmachine: (no-preload-817450) Waiting for SSH to be available...
	I0603 13:50:47.137239 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137669 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.137705 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137810 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH client type: external
	I0603 13:50:47.137835 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa (-rw-------)
	I0603 13:50:47.137870 1142862 main.go:141] libmachine: (no-preload-817450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:47.137879 1142862 main.go:141] libmachine: (no-preload-817450) DBG | About to run SSH command:
	I0603 13:50:47.137889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | exit 0
	I0603 13:50:47.265932 1142862 main.go:141] libmachine: (no-preload-817450) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:47.266268 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetConfigRaw
	I0603 13:50:47.267007 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.269463 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.269849 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.269885 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.270135 1142862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/config.json ...
	I0603 13:50:47.270355 1142862 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:47.270375 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:47.270589 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.272915 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273307 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.273341 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273543 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.273737 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.273905 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.274061 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.274242 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.274417 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.274429 1142862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:47.380760 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:47.380789 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381068 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:50:47.381095 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381314 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.384093 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384460 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.384482 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.384798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.384938 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.385099 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.385276 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.385533 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.385562 1142862 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-817450 && echo "no-preload-817450" | sudo tee /etc/hostname
	I0603 13:50:47.505203 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-817450
	
	I0603 13:50:47.505231 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.508267 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508696 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.508721 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508877 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.509066 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509281 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509437 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.509606 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.509780 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.509795 1142862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-817450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-817450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-817450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:47.618705 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:47.618757 1142862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:47.618822 1142862 buildroot.go:174] setting up certificates
	I0603 13:50:47.618835 1142862 provision.go:84] configureAuth start
	I0603 13:50:47.618854 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.619166 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.621974 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622512 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.622548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622652 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.624950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625275 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.625302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625419 1142862 provision.go:143] copyHostCerts
	I0603 13:50:47.625504 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:47.625520 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:47.625591 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:47.625697 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:47.625706 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:47.625725 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:47.625790 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:47.625800 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:47.625826 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:47.625891 1142862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.no-preload-817450 san=[127.0.0.1 192.168.72.125 localhost minikube no-preload-817450]
	I0603 13:50:47.733710 1142862 provision.go:177] copyRemoteCerts
	I0603 13:50:47.733769 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:47.733801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.736326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736657 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.736686 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.737036 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.737222 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.737341 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:47.821893 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:47.848085 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 13:50:47.875891 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:47.900761 1142862 provision.go:87] duration metric: took 281.906702ms to configureAuth
	I0603 13:50:47.900795 1142862 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:47.900986 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:47.901072 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.904128 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904551 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.904581 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904802 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.905018 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905203 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905413 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.905609 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.905816 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.905839 1142862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:48.176290 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:48.176321 1142862 machine.go:97] duration metric: took 905.950732ms to provisionDockerMachine
	I0603 13:50:48.176333 1142862 start.go:293] postStartSetup for "no-preload-817450" (driver="kvm2")
	I0603 13:50:48.176344 1142862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:48.176361 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.176689 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:48.176712 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.179595 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.179994 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.180020 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.180186 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.180398 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.180561 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.180704 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.267996 1142862 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:48.272936 1142862 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:48.272970 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:48.273044 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:48.273141 1142862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:48.273285 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:48.283984 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:48.310846 1142862 start.go:296] duration metric: took 134.495139ms for postStartSetup
	I0603 13:50:48.310899 1142862 fix.go:56] duration metric: took 18.512032449s for fixHost
	I0603 13:50:48.310928 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.313969 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314331 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.314358 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.314896 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315258 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.315442 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:48.315681 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:48.315698 1142862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:50:48.422576 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422648.390814282
	
	I0603 13:50:48.422599 1142862 fix.go:216] guest clock: 1717422648.390814282
	I0603 13:50:48.422606 1142862 fix.go:229] Guest: 2024-06-03 13:50:48.390814282 +0000 UTC Remote: 2024-06-03 13:50:48.310904217 +0000 UTC m=+357.796105522 (delta=79.910065ms)
	I0603 13:50:48.422636 1142862 fix.go:200] guest clock delta is within tolerance: 79.910065ms
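The step above reads the guest clock over SSH (date +%s.%N), compares it with the local timestamp, and only resyncs when the delta exceeds a tolerance; here the 79.9ms delta is accepted. A minimal sketch of that comparison, using the values from the log and assuming a one-second tolerance (the actual threshold is not shown in this output):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is small
// enough to skip an explicit clock resync.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1717422648, 390814282)
	host := guest.Add(-79910065 * time.Nanosecond)

	delta, ok := withinTolerance(guest, host, time.Second) // tolerance is an assumption
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}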
	I0603 13:50:48.422642 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 18.623816039s
	I0603 13:50:48.422659 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.422954 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:48.426261 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426671 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.426701 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426864 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427460 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427661 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427762 1142862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:48.427827 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.427878 1142862 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:48.427914 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.430586 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430830 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430965 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.430993 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431177 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.431355 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431387 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431516 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431676 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431751 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.431798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431936 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.506899 1142862 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:48.545903 1142862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:48.700235 1142862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:48.706614 1142862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:48.706704 1142862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:48.724565 1142862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
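The find/mv step above sidelines bridge and podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the CNI that minikube configures later. A rough sketch of the same idea, assuming simple substring matching instead of the exact find expression:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman CNI config files so the container
// runtime ignores them, mirroring the `find ... -exec mv {} {}.mk_disabled` step.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "disable CNI configs:", err)
		return
	}
	fmt.Println("disabled:", disabled)
}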
	I0603 13:50:48.724592 1142862 start.go:494] detecting cgroup driver to use...
	I0603 13:50:48.724664 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:48.741006 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:48.758824 1142862 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:48.758899 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:48.773280 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:48.791049 1142862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:48.917847 1142862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:49.081837 1142862 docker.go:233] disabling docker service ...
	I0603 13:50:49.081927 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:49.097577 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:49.112592 1142862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:49.228447 1142862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:49.350782 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
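The preceding block stops, disables and masks both cri-docker and docker so that CRI-O is the only runtime the kubelet can reach. A compressed sketch of that systemctl sequence, run locally instead of over SSH; individual failures are tolerated because the units may simply not exist on the image:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror the stop/disable/mask sequence from the log for cri-docker and docker.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		// Failures are reported and skipped rather than treated as fatal.
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}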
	I0603 13:50:49.366017 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:49.385685 1142862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:49.385765 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.396361 1142862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:49.396432 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.408606 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.419642 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.430431 1142862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:49.441378 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.451810 1142862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.469080 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
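The run of sed commands above rewrites four things in the 02-crio.conf drop-in: the pause image, the cgroup manager, conmon_cgroup, and a default_sysctls entry that opens unprivileged ports. A rough in-memory equivalent of those edits, not the exact sed sequence, and using a made-up stand-in for the real file contents:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A tiny stand-in for /etc/crio/crio.conf.d/02-crio.conf; the real file
	// has more keys, this only shows the lines the sed commands touch.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Mirror the edits from the log: pause image, cgroupfs driver,
	// conmon_cgroup = "pod", and the unprivileged-port sysctl.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"

	fmt.Print(conf)
}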
	I0603 13:50:49.480054 1142862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:49.489742 1142862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:49.489814 1142862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:49.502889 1142862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:49.512414 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:49.639903 1142862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:49.786388 1142862 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:49.786486 1142862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:49.791642 1142862 start.go:562] Will wait 60s for crictl version
	I0603 13:50:49.791711 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:49.796156 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:49.841667 1142862 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
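After restarting crio, the code waits up to 60s for the CRI socket to appear before it trusts crictl. A minimal sketch of that wait loop, assuming a simple half-second poll interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires, similar to
// the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}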
	I0603 13:50:49.841765 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.872213 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.910979 1142862 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:46.370749 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:48.870860 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:49.912417 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:49.915368 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915731 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:49.915759 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915913 1142862 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:49.920247 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:49.933231 1142862 kubeadm.go:877] updating cluster {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:49.933358 1142862 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:49.933388 1142862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:49.970029 1142862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:49.970059 1142862 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:49.970118 1142862 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:49.970147 1142862 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.970163 1142862 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.970198 1142862 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.970239 1142862 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.970316 1142862 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.970328 1142862 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.970379 1142862 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971837 1142862 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.971841 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.971808 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.971876 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.971816 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.971813 1142862 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.126557 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 13:50:50.146394 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.149455 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.149755 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.154990 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.162983 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.177520 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.188703 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.299288 1142862 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 13:50:50.299312 1142862 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 13:50:50.299345 1142862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.299350 1142862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.299389 1142862 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 13:50:50.299406 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299413 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299422 1142862 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.299488 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353368 1142862 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 13:50:50.353431 1142862 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.353485 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353506 1142862 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 13:50:50.353543 1142862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.353591 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379011 1142862 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 13:50:50.379028 1142862 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 13:50:50.379054 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.379062 1142862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.379105 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379075 1142862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.379146 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.379181 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379212 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.379229 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.379239 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.482204 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 13:50:50.482210 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.482332 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.511560 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 13:50:50.511671 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 13:50:50.511721 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.511769 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:50.511772 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 13:50:50.511682 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:50.511868 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:50.512290 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 13:50:50.512360 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:50.549035 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 13:50:50.549061 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 13:50:50.549066 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549156 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549166 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
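Because this is a no-preload profile, every required image is loaded from the local cache: each tarball is stat'ed on the guest, copied only if missing, and then fed to `podman load` one image at a time. A simplified local sketch of that per-image flow; the real code copies over SSH and tracks which images still "need transfer", and the cache path below is only illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImage copies a cached image tarball into place (unless it is
// already there) and loads it into the CRI-O image store with podman,
// mirroring the "Loading image from ..." / "podman load -i ..." steps above.
func loadCachedImage(cacheTar, destDir string) error {
	dest := filepath.Join(destDir, filepath.Base(cacheTar))
	if _, err := os.Stat(dest); err != nil {
		// In the real flow this copy happens over SSH; a local copy stands in here.
		data, err := os.ReadFile(cacheTar)
		if err != nil {
			return err
		}
		if err := os.WriteFile(dest, data, 0o644); err != nil {
			return err
		}
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", dest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", dest, err, out)
	}
	return nil
}

func main() {
	// Hypothetical local cache path; the log uses the jenkins .minikube cache.
	err := loadCachedImage("cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1",
		"/var/lib/minikube/images")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}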
	I0603 13:50:47.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.193894 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.694053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.694081 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.194053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.694265 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.694283 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:52.194444 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.321194 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.816679 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:52.818121 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:51.372716 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:53.372880 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.573615 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 13:50:50.573661 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 13:50:50.573708 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 13:50:50.573737 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:50.573754 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 13:50:50.573816 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 13:50:50.573839 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 13:50:52.739312 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (2.190102069s)
	I0603 13:50:52.739333 1142862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.165569436s)
	I0603 13:50:52.739354 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 13:50:52.739365 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 13:50:52.739372 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:52.739420 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:54.995960 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.256502953s)
	I0603 13:50:54.996000 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 13:50:54.996019 1142862 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:54.996076 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:52.694071 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.193597 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.694503 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.193609 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.694446 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.193856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.693583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.194271 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.693558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:57.194427 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.317668 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:57.318423 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.872030 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:58.376034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.844775 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 13:50:55.844853 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:55.844967 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:58.110074 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.265068331s)
	I0603 13:50:58.110103 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 13:50:58.110115 1142862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:58.110169 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:59.979789 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.869594477s)
	I0603 13:50:59.979817 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 13:50:59.979832 1142862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:59.979875 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:57.694027 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.193718 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.693488 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.193725 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.694310 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.194455 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.694182 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.193916 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.693504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:02.194236 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.816444 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:01.817757 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:00.872105 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:03.373427 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:04.067476 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.087571936s)
	I0603 13:51:04.067529 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 13:51:04.067549 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:04.067605 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:02.694248 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.194094 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.694072 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.194494 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.693899 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.193578 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.193934 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.693586 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:07.193993 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.316979 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:06.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.871061 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:08.371377 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.819264 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.75162069s)
	I0603 13:51:05.819302 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 13:51:05.819334 1142862 cache_images.go:123] Successfully loaded all cached images
	I0603 13:51:05.819341 1142862 cache_images.go:92] duration metric: took 15.849267186s to LoadCachedImages
	I0603 13:51:05.819352 1142862 kubeadm.go:928] updating node { 192.168.72.125 8443 v1.30.1 crio true true} ...
	I0603 13:51:05.819549 1142862 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-817450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:51:05.819636 1142862 ssh_runner.go:195] Run: crio config
	I0603 13:51:05.874089 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:05.874114 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:05.874127 1142862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:51:05.874152 1142862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.125 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-817450 NodeName:no-preload-817450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:51:05.874339 1142862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-817450"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:51:05.874411 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:51:05.886116 1142862 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:51:05.886185 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:51:05.896269 1142862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 13:51:05.914746 1142862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:51:05.931936 1142862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 13:51:05.949151 1142862 ssh_runner.go:195] Run: grep 192.168.72.125	control-plane.minikube.internal$ /etc/hosts
	I0603 13:51:05.953180 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:51:05.966675 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:51:06.107517 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:51:06.129233 1142862 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450 for IP: 192.168.72.125
	I0603 13:51:06.129264 1142862 certs.go:194] generating shared ca certs ...
	I0603 13:51:06.129280 1142862 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:51:06.129517 1142862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:51:06.129583 1142862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:51:06.129597 1142862 certs.go:256] generating profile certs ...
	I0603 13:51:06.129686 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/client.key
	I0603 13:51:06.129746 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key.e8ec030b
	I0603 13:51:06.129779 1142862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key
	I0603 13:51:06.129885 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:51:06.129912 1142862 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:51:06.129919 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:51:06.129939 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:51:06.129965 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:51:06.129991 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:51:06.130028 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:51:06.130817 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:51:06.171348 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:51:06.206270 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:51:06.240508 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:51:06.292262 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:51:06.320406 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:51:06.346655 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:51:06.375908 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:51:06.401723 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:51:06.425992 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:51:06.450484 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:51:06.475206 1142862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:51:06.492795 1142862 ssh_runner.go:195] Run: openssl version
	I0603 13:51:06.499759 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:51:06.511760 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516690 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516763 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.523284 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:51:06.535250 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:51:06.545921 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550765 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550823 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.556898 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:51:06.567717 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:51:06.578662 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584084 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584153 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.591566 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
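Each CA above is installed by hashing it with openssl and symlinking it into /etc/ssl/certs under the "<hash>.0" name that OpenSSL's lookup expects. A minimal sketch of that hash-and-link step, shelling out to openssl the same way the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash reproduces the openssl-hash + symlink step from the log:
// the cert's subject hash becomes the "<hash>.0" name looked up in /etc/ssl/certs.
func linkCertByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already linked
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}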
	I0603 13:51:06.603554 1142862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:51:06.608323 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:51:06.614939 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:51:06.621519 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:51:06.627525 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:51:06.633291 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:51:06.639258 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
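The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. The same question can be answered without shelling out to openssl; a sketch using the standard library, with the cert path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// the same check `openssl x509 -checkend 86400` performs in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}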
	I0603 13:51:06.644789 1142862 kubeadm.go:391] StartCluster: {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:51:06.644876 1142862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:51:06.644928 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.694731 1142862 cri.go:89] found id: ""
	I0603 13:51:06.694811 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:51:06.709773 1142862 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:51:06.709804 1142862 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:51:06.709812 1142862 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:51:06.709875 1142862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:51:06.721095 1142862 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:51:06.722256 1142862 kubeconfig.go:125] found "no-preload-817450" server: "https://192.168.72.125:8443"
	I0603 13:51:06.724877 1142862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:51:06.735753 1142862 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.125
	I0603 13:51:06.735789 1142862 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:51:06.735802 1142862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:51:06.735847 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.776650 1142862 cri.go:89] found id: ""
	I0603 13:51:06.776743 1142862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:51:06.796259 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:51:06.809765 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:51:06.809785 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:51:06.809839 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:51:06.819821 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:51:06.819878 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:51:06.829960 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:51:06.839510 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:51:06.839561 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:51:06.849346 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.858834 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:51:06.858886 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.869159 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:51:06.879672 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:51:06.879739 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
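	(The lines above show minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the control-plane URL and removed when the URL is missing or the file is absent. Below is a minimal Go sketch of that check-then-remove pattern, run locally rather than over SSH as minikube does; the paths and URL come from the log, the helper name is illustrative.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path when it does not reference wantURL, mirroring the
// "grep https://control-plane.minikube.internal:8443 ... || rm -f" steps in the log.
func removeIfStale(path, wantURL string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), wantURL) {
		return nil // config already points at the expected control plane
	}
	return os.Remove(path)
}

func main() {
	const wantURL = "https://control-plane.minikube.internal:8443"
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(p, wantURL); err != nil {
			fmt.Fprintln(os.Stderr, "cleanup:", err)
		}
	}
}
```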
	I0603 13:51:06.889393 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:51:06.899309 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:07.021375 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.119929 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.098510185s)
	I0603 13:51:08.119959 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.318752 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.396713 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
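	(The five commands above re-run individual `kubeadm init` phases against the generated /var/tmp/minikube/kubeadm.yaml. A rough sketch of driving that phase sequence from Go is shown below; the phase order and config path are taken from the log, while the SSH and PATH plumbing minikube uses is omitted.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	// Phase order as it appears in the log: certs, kubeconfig, kubelet-start,
	// control-plane, etcd ("addon all" runs later, once the apiserver is healthy).
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, ph := range phases {
		args := append([]string{"init", "phase"}, ph...)
		args = append(args, "--config", cfg)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm phase %v failed: %v\n", ph, err)
			os.Exit(1)
		}
	}
}
```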
	I0603 13:51:08.506285 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:51:08.506384 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.006865 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.506528 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.582432 1142862 api_server.go:72] duration metric: took 1.076134659s to wait for apiserver process to appear ...
	I0603 13:51:09.582463 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:51:09.582507 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:07.693540 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.194490 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.694498 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.194496 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.694286 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.193605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.694326 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.193904 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.694504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:12.194093 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
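	(The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a simple poll: run pgrep roughly every 500ms until an apiserver process appears. A small local sketch of that loop follows; the pattern string is from the log, the timeout value is illustrative, and the sudo/SSH wrapping is left out.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline passes.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // PID of the newest matching process
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("kube-apiserver pid: ", pid)
}
```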
	I0603 13:51:08.318739 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.817309 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.371622 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.372640 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:14.871007 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.049693 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.049731 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.049748 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.084495 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.084526 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.084541 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.141515 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.141555 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:12.582630 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.590279 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.082813 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.097350 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.097380 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.582895 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.587479 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.587511 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.083076 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.087531 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.087561 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.583203 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.587735 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.587781 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.082844 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.087403 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:15.087438 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.583226 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:51:15.601732 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:51:15.601762 1142862 api_server.go:131] duration metric: took 6.019291333s to wait for apiserver health ...
	I0603 13:51:15.601775 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:15.601784 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:15.603654 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
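	(The 403 -> 500 -> 200 progression above is an anonymous probe of the apiserver's /healthz endpoint while its post-start hooks finish. Below is a minimal Go sketch of that health poll, assuming the endpoint from the log and skipping certificate verification as an unauthenticated probe would; the retry interval and deadline are illustrative.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.72.125:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe: only the status code matters, so certificate checks are skipped.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok" once all hooks pass
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
```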
	I0603 13:51:12.694356 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.194219 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.693546 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.694003 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.694012 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.193567 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.694014 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:17.193554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.320666 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.818073 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.369593 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:19.369916 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.605291 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:51:15.618333 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:51:15.640539 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:51:15.651042 1142862 system_pods.go:59] 8 kube-system pods found
	I0603 13:51:15.651086 1142862 system_pods.go:61] "coredns-7db6d8ff4d-s562v" [be995d41-2b25-4839-a36b-212a507e7db7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:51:15.651102 1142862 system_pods.go:61] "etcd-no-preload-817450" [1b21708b-d81b-4594-a186-546437467c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:51:15.651117 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [0741a4bf-3161-4cf3-a9c6-36af2a0c4fde] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:51:15.651126 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [43713383-9197-4874-8aa9-7b1b1f05e4b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:51:15.651133 1142862 system_pods.go:61] "kube-proxy-2j4sg" [112657ad-311a-46ee-b5c0-6f544991465e] Running
	I0603 13:51:15.651145 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [40db5c40-dc01-4fd3-a5e0-06a6ee1fd0a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:51:15.651152 1142862 system_pods.go:61] "metrics-server-569cc877fc-mtvrq" [00cb7657-2564-4d25-8faa-b6f618e61115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:51:15.651163 1142862 system_pods.go:61] "storage-provisioner" [913d3120-32ce-4212-84be-9e3b99f2a894] Running
	I0603 13:51:15.651171 1142862 system_pods.go:74] duration metric: took 10.608401ms to wait for pod list to return data ...
	I0603 13:51:15.651181 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:51:15.654759 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:51:15.654784 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:51:15.654795 1142862 node_conditions.go:105] duration metric: took 3.608137ms to run NodePressure ...
	I0603 13:51:15.654813 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:15.940085 1142862 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944785 1142862 kubeadm.go:733] kubelet initialised
	I0603 13:51:15.944808 1142862 kubeadm.go:734] duration metric: took 4.692827ms waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944817 1142862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:51:15.950113 1142862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:17.958330 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.456029 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.693856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.193853 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.693858 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.193568 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.693680 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.193556 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.694129 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.193662 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.694445 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:22.193668 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.317128 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.317375 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.317530 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.371070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:23.871400 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.958183 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:21.958208 1142862 pod_ready.go:81] duration metric: took 6.008058251s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:21.958220 1142862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:23.964785 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.694004 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.193793 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.694340 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.194411 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.694314 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.194501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.693545 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.194255 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.694312 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:27.194453 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.817165 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.317176 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:26.369665 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:28.370392 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:25.966060 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.965236 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.965267 1142862 pod_ready.go:81] duration metric: took 6.007038184s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.965281 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969898 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.969920 1142862 pod_ready.go:81] duration metric: took 4.630357ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969932 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974500 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.974517 1142862 pod_ready.go:81] duration metric: took 4.577117ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974526 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978510 1142862 pod_ready.go:92] pod "kube-proxy-2j4sg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.978530 1142862 pod_ready.go:81] duration metric: took 3.997645ms for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978537 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982488 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.982507 1142862 pod_ready.go:81] duration metric: took 3.962666ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982518 1142862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:29.989265 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
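	(The pod_ready lines above poll each system pod until its Ready condition reports "True". A short client-go sketch of that readiness check follows; the namespace and pod name are taken from the log, while the kubeconfig path and poll interval are illustrative.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-817450", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the log polls on a similar cadence
	}
}
```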
	I0603 13:51:27.694334 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.193809 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.693744 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.193608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.194111 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.694213 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.694336 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:32.193716 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.324199 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:30.370435 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.870510 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.872543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.990649 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.488899 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.693501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.194174 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.693995 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.194242 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.693961 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.194052 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.693730 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.193559 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.693763 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:37.194274 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.816533 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.316832 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.371543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:39.372034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.489364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:38.490431 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:40.490888 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.693590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.194328 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.694296 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.194272 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.693607 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:40.193595 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:40.193691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:40.237747 1143678 cri.go:89] found id: ""
	I0603 13:51:40.237776 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.237785 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:40.237792 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:40.237854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:40.275924 1143678 cri.go:89] found id: ""
	I0603 13:51:40.275964 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.275975 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:40.275983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:40.276049 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:40.314827 1143678 cri.go:89] found id: ""
	I0603 13:51:40.314857 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.314870 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:40.314877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:40.314939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:40.359040 1143678 cri.go:89] found id: ""
	I0603 13:51:40.359072 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.359084 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:40.359092 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:40.359154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:40.396136 1143678 cri.go:89] found id: ""
	I0603 13:51:40.396170 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.396185 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:40.396194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:40.396261 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:40.436766 1143678 cri.go:89] found id: ""
	I0603 13:51:40.436803 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.436814 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:40.436828 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:40.436902 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:40.477580 1143678 cri.go:89] found id: ""
	I0603 13:51:40.477606 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.477615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:40.477621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:40.477713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:40.518920 1143678 cri.go:89] found id: ""
	I0603 13:51:40.518960 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.518972 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
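	(The block above enumerates CRI containers for each control-plane component with `crictl ps -a --quiet --name=<component>`; an empty result is logged as "0 containers". A small Go sketch of that enumeration follows; the component list is copied from the log and the sudo/SSH plumbing is omitted.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs crictl reports for containers whose name matches filter.
func containerIDs(filter string) []string {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+filter).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out)) // --quiet prints one container ID per line
}

func main() {
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids := containerIDs(name)
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```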
	I0603 13:51:40.518984 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:40.519001 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:40.659881 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:40.659913 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:40.659932 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:40.727850 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:40.727894 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:40.774153 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:40.774189 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:40.828054 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:40.828094 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:38.820985 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.322044 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.870717 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.872112 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:42.988898 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:44.989384 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.342659 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:43.357063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:43.357131 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:43.398000 1143678 cri.go:89] found id: ""
	I0603 13:51:43.398036 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.398045 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:43.398051 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:43.398106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:43.436761 1143678 cri.go:89] found id: ""
	I0603 13:51:43.436805 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.436814 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:43.436820 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:43.436872 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:43.478122 1143678 cri.go:89] found id: ""
	I0603 13:51:43.478154 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.478164 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:43.478172 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:43.478243 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:43.514473 1143678 cri.go:89] found id: ""
	I0603 13:51:43.514511 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.514523 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:43.514532 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:43.514600 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:43.552354 1143678 cri.go:89] found id: ""
	I0603 13:51:43.552390 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.552399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:43.552405 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:43.552489 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:43.590637 1143678 cri.go:89] found id: ""
	I0603 13:51:43.590665 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.590677 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:43.590685 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:43.590745 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:43.633958 1143678 cri.go:89] found id: ""
	I0603 13:51:43.634001 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.634013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:43.634021 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:43.634088 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:43.672640 1143678 cri.go:89] found id: ""
	I0603 13:51:43.672683 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.672695 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:43.672716 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:43.672733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:43.725880 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:43.725937 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:43.743736 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:43.743771 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:43.831757 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:43.831785 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:43.831801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:43.905062 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:43.905114 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
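
The block above is minikube's fallback diagnostic loop once the control plane is unreachable: it asks the CRI runtime for each expected container by name (kube-apiserver, etcd, coredns, and so on), finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal Go sketch of that container query is shown below; it is illustrative only, not minikube's cri package, and it assumes crictl is available via sudo on the machine it runs on.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Container names taken from the log above.
	names := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range names {
		// Same query the log shows: list all containers whose name matches.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl query for %q failed: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
		}
	}
}

Run inside the guest VM, this reproduces the "No container was found matching ..." lines that repeat throughout this log.
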
	I0603 13:51:46.459588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:46.472911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:46.472983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:46.513723 1143678 cri.go:89] found id: ""
	I0603 13:51:46.513757 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.513768 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:46.513776 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:46.513845 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:46.549205 1143678 cri.go:89] found id: ""
	I0603 13:51:46.549234 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.549242 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:46.549251 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:46.549311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:46.585004 1143678 cri.go:89] found id: ""
	I0603 13:51:46.585042 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.585053 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:46.585063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:46.585120 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:46.620534 1143678 cri.go:89] found id: ""
	I0603 13:51:46.620571 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.620582 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:46.620590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:46.620661 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:46.655974 1143678 cri.go:89] found id: ""
	I0603 13:51:46.656005 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.656014 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:46.656020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:46.656091 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:46.693078 1143678 cri.go:89] found id: ""
	I0603 13:51:46.693141 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.693158 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:46.693168 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:46.693244 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:46.729177 1143678 cri.go:89] found id: ""
	I0603 13:51:46.729213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.729223 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:46.729232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:46.729300 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:46.766899 1143678 cri.go:89] found id: ""
	I0603 13:51:46.766929 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.766937 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:46.766946 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:46.766959 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:46.826715 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:46.826757 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:46.841461 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:46.841504 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:46.914505 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:46.914533 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:46.914551 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:46.989886 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:46.989928 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:43.817456 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:45.817576 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.370927 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:48.371196 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.990440 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.489483 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
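
The interleaved pod_ready lines come from three other test clusters polling the Ready condition of their metrics-server pods every few seconds. A small client-go sketch of that kind of poll follows; it is not minikube's pod_ready helper, and the kubeconfig path and pod name are assumptions lifted from the log for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig; point it at the cluster you are watching.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log above; adjust for your cluster.
	const ns, pod = "kube-system", "metrics-server-569cc877fc-v7d9t"
	for i := 0; i < 30; i++ {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if err != nil {
			fmt.Println("get pod:", err)
		} else {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready: %s\n", pod, c.Status)
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}
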
	I0603 13:51:49.532804 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:49.547359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:49.547438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:49.584262 1143678 cri.go:89] found id: ""
	I0603 13:51:49.584299 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.584311 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:49.584319 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:49.584389 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:49.622332 1143678 cri.go:89] found id: ""
	I0603 13:51:49.622372 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.622384 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:49.622393 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:49.622488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:49.664339 1143678 cri.go:89] found id: ""
	I0603 13:51:49.664378 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.664390 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:49.664399 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:49.664468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:49.712528 1143678 cri.go:89] found id: ""
	I0603 13:51:49.712558 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.712565 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:49.712574 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:49.712640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:49.767343 1143678 cri.go:89] found id: ""
	I0603 13:51:49.767374 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.767382 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:49.767388 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:49.767450 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:49.822457 1143678 cri.go:89] found id: ""
	I0603 13:51:49.822491 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.822499 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:49.822505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:49.822561 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:49.867823 1143678 cri.go:89] found id: ""
	I0603 13:51:49.867855 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.867867 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:49.867875 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:49.867936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:49.906765 1143678 cri.go:89] found id: ""
	I0603 13:51:49.906797 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.906805 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:49.906816 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:49.906829 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:49.921731 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:49.921764 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:49.993832 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:49.993860 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:49.993878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:50.070080 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:50.070125 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:50.112323 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:50.112357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:48.317830 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.816577 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.817035 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.871664 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.871865 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:51.990258 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:54.489037 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.666289 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:52.680475 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:52.680550 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:52.722025 1143678 cri.go:89] found id: ""
	I0603 13:51:52.722063 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.722075 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:52.722083 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:52.722145 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:52.759709 1143678 cri.go:89] found id: ""
	I0603 13:51:52.759742 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.759754 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:52.759762 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:52.759838 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:52.797131 1143678 cri.go:89] found id: ""
	I0603 13:51:52.797162 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.797171 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:52.797176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:52.797231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:52.832921 1143678 cri.go:89] found id: ""
	I0603 13:51:52.832951 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.832959 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:52.832965 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:52.833024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:52.869361 1143678 cri.go:89] found id: ""
	I0603 13:51:52.869389 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.869399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:52.869422 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:52.869495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:52.905863 1143678 cri.go:89] found id: ""
	I0603 13:51:52.905897 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.905909 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:52.905917 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:52.905985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:52.940407 1143678 cri.go:89] found id: ""
	I0603 13:51:52.940438 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.940446 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:52.940452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:52.940517 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:52.982079 1143678 cri.go:89] found id: ""
	I0603 13:51:52.982115 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.982126 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:52.982138 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:52.982155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:53.066897 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:53.066942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:53.108016 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:53.108056 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:53.164105 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:53.164151 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:53.178708 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:53.178743 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:53.257441 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
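
Every describe-nodes attempt fails the same way: the connection to localhost:8443 is refused because no kube-apiserver container exists (the crictl queries above all come back empty). A quick probe of that port, sketched below with the host and port copied from the error text, confirms the symptom without going through kubectl.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Host and port taken from the error above; adjust as needed.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("API server not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
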
	I0603 13:51:55.758633 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:55.774241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:55.774329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:55.809373 1143678 cri.go:89] found id: ""
	I0603 13:51:55.809436 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.809450 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:55.809467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:55.809539 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:55.849741 1143678 cri.go:89] found id: ""
	I0603 13:51:55.849768 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.849776 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:55.849783 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:55.849834 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:55.893184 1143678 cri.go:89] found id: ""
	I0603 13:51:55.893216 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.893228 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:55.893238 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:55.893307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:55.931572 1143678 cri.go:89] found id: ""
	I0603 13:51:55.931618 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.931632 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:55.931642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:55.931713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:55.969490 1143678 cri.go:89] found id: ""
	I0603 13:51:55.969527 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.969538 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:55.969546 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:55.969614 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:56.009266 1143678 cri.go:89] found id: ""
	I0603 13:51:56.009301 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.009313 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:56.009321 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:56.009394 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:56.049471 1143678 cri.go:89] found id: ""
	I0603 13:51:56.049520 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.049540 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:56.049547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:56.049616 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:56.090176 1143678 cri.go:89] found id: ""
	I0603 13:51:56.090213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.090228 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:56.090241 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:56.090266 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:56.175692 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:56.175737 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:56.222642 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:56.222683 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:56.276258 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:56.276301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:56.291703 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:56.291739 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:56.364788 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.316604 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.816804 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:55.370917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.372903 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:59.870783 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:56.489636 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.990006 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.865558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:58.879983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:58.880074 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:58.917422 1143678 cri.go:89] found id: ""
	I0603 13:51:58.917461 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.917473 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:58.917480 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:58.917535 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:58.953900 1143678 cri.go:89] found id: ""
	I0603 13:51:58.953933 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.953943 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:58.953959 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:58.954030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:58.988677 1143678 cri.go:89] found id: ""
	I0603 13:51:58.988704 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.988713 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:58.988721 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:58.988783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:59.023436 1143678 cri.go:89] found id: ""
	I0603 13:51:59.023474 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.023486 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:59.023494 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:59.023570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:59.061357 1143678 cri.go:89] found id: ""
	I0603 13:51:59.061386 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.061394 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:59.061400 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:59.061487 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:59.102995 1143678 cri.go:89] found id: ""
	I0603 13:51:59.103025 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.103038 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:59.103047 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:59.103124 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:59.141443 1143678 cri.go:89] found id: ""
	I0603 13:51:59.141480 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.141492 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:59.141499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:59.141586 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:59.182909 1143678 cri.go:89] found id: ""
	I0603 13:51:59.182943 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.182953 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:59.182967 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:59.182984 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:59.259533 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:59.259580 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:59.308976 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:59.309016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.362092 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:59.362142 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:59.378836 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:59.378887 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:59.454524 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:01.954939 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:01.969968 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:01.970039 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:02.014226 1143678 cri.go:89] found id: ""
	I0603 13:52:02.014267 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.014280 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:02.014289 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:02.014361 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:02.051189 1143678 cri.go:89] found id: ""
	I0603 13:52:02.051244 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.051259 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:02.051268 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:02.051349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:02.093509 1143678 cri.go:89] found id: ""
	I0603 13:52:02.093548 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.093575 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:02.093586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:02.093718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:02.132069 1143678 cri.go:89] found id: ""
	I0603 13:52:02.132113 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.132129 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:02.132138 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:02.132299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:02.168043 1143678 cri.go:89] found id: ""
	I0603 13:52:02.168071 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.168079 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:02.168085 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:02.168138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:02.207029 1143678 cri.go:89] found id: ""
	I0603 13:52:02.207064 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.207074 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:02.207081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:02.207134 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:02.247669 1143678 cri.go:89] found id: ""
	I0603 13:52:02.247719 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.247728 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:02.247734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:02.247848 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:02.285780 1143678 cri.go:89] found id: ""
	I0603 13:52:02.285817 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.285829 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:02.285841 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:02.285863 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.817887 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.818381 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.871338 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:04.371052 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:00.990263 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.990651 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:05.490343 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.348775 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:02.349776 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:02.364654 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:02.364691 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:02.447948 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:02.447978 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:02.447992 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:02.534039 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:02.534100 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:05.080437 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:05.094169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:05.094245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:05.132312 1143678 cri.go:89] found id: ""
	I0603 13:52:05.132339 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.132346 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:05.132352 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:05.132423 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:05.168941 1143678 cri.go:89] found id: ""
	I0603 13:52:05.168979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.168990 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:05.168999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:05.169068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:05.207151 1143678 cri.go:89] found id: ""
	I0603 13:52:05.207188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.207196 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:05.207202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:05.207272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:05.258807 1143678 cri.go:89] found id: ""
	I0603 13:52:05.258839 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.258850 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:05.258859 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:05.259004 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:05.298250 1143678 cri.go:89] found id: ""
	I0603 13:52:05.298285 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.298297 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:05.298306 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:05.298381 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:05.340922 1143678 cri.go:89] found id: ""
	I0603 13:52:05.340951 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.340959 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:05.340966 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:05.341027 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:05.382680 1143678 cri.go:89] found id: ""
	I0603 13:52:05.382707 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.382715 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:05.382722 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:05.382777 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:05.426774 1143678 cri.go:89] found id: ""
	I0603 13:52:05.426801 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.426811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:05.426822 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:05.426836 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:05.483042 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:05.483091 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:05.499119 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:05.499159 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:05.580933 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:05.580962 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:05.580983 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:05.660395 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:05.660437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:03.818676 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.316881 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.371515 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.871174 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:07.490662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:09.992709 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.200887 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:08.215113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:08.215203 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:08.252367 1143678 cri.go:89] found id: ""
	I0603 13:52:08.252404 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.252417 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:08.252427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:08.252500 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:08.289249 1143678 cri.go:89] found id: ""
	I0603 13:52:08.289279 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.289290 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:08.289298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:08.289364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:08.331155 1143678 cri.go:89] found id: ""
	I0603 13:52:08.331181 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.331195 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:08.331201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:08.331258 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:08.371376 1143678 cri.go:89] found id: ""
	I0603 13:52:08.371400 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.371408 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:08.371415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:08.371477 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:08.408009 1143678 cri.go:89] found id: ""
	I0603 13:52:08.408045 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.408057 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:08.408065 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:08.408119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:08.446377 1143678 cri.go:89] found id: ""
	I0603 13:52:08.446413 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.446421 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:08.446429 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:08.446504 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:08.485429 1143678 cri.go:89] found id: ""
	I0603 13:52:08.485461 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.485471 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:08.485479 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:08.485546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:08.527319 1143678 cri.go:89] found id: ""
	I0603 13:52:08.527363 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.527375 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:08.527388 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:08.527414 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:08.602347 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:08.602371 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:08.602384 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:08.683855 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:08.683902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.724402 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:08.724443 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:08.781154 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:08.781202 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
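
With no control-plane containers to inspect, the only useful evidence is the host-level output minikube gathers at the end of each cycle. The rough sketch below runs the same gather commands shown in the log (copied verbatim); it is meant to be run inside the guest VM and is an illustration, not a definitive tool.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the log above.
	cmds := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"CRI-O", `sudo journalctl -u crio -n 400`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("=== %s (err: %v) ===\n%s\n", c.name, err, out)
	}
}
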
	I0603 13:52:11.297827 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:11.313927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:11.314006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:11.352622 1143678 cri.go:89] found id: ""
	I0603 13:52:11.352660 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.352671 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:11.352678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:11.352755 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:11.395301 1143678 cri.go:89] found id: ""
	I0603 13:52:11.395338 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.395351 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:11.395360 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:11.395442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:11.431104 1143678 cri.go:89] found id: ""
	I0603 13:52:11.431143 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.431155 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:11.431170 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:11.431234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:11.470177 1143678 cri.go:89] found id: ""
	I0603 13:52:11.470212 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.470223 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:11.470241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:11.470309 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:11.508741 1143678 cri.go:89] found id: ""
	I0603 13:52:11.508779 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.508803 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:11.508810 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:11.508906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:11.544970 1143678 cri.go:89] found id: ""
	I0603 13:52:11.545002 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.545012 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:11.545022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:11.545093 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:11.583606 1143678 cri.go:89] found id: ""
	I0603 13:52:11.583636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.583653 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:11.583666 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:11.583739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:11.624770 1143678 cri.go:89] found id: ""
	I0603 13:52:11.624806 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.624815 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:11.624824 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:11.624841 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:11.680251 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:11.680298 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.695656 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:11.695695 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:11.770414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:11.770478 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:11.770497 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:11.850812 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:11.850871 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.318447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:10.817734 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:11.372533 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:13.871822 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:12.490666 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.988752 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.398649 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:14.411591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:14.411689 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:14.447126 1143678 cri.go:89] found id: ""
	I0603 13:52:14.447158 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.447170 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:14.447178 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:14.447245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:14.486681 1143678 cri.go:89] found id: ""
	I0603 13:52:14.486716 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.486728 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:14.486735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:14.486799 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:14.521297 1143678 cri.go:89] found id: ""
	I0603 13:52:14.521326 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.521337 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:14.521343 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:14.521443 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:14.565086 1143678 cri.go:89] found id: ""
	I0603 13:52:14.565121 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.565130 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:14.565136 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:14.565196 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:14.601947 1143678 cri.go:89] found id: ""
	I0603 13:52:14.601975 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.601984 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:14.601990 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:14.602044 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:14.638332 1143678 cri.go:89] found id: ""
	I0603 13:52:14.638359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.638366 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:14.638374 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:14.638435 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:14.675254 1143678 cri.go:89] found id: ""
	I0603 13:52:14.675284 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.675293 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:14.675299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:14.675354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:14.712601 1143678 cri.go:89] found id: ""
	I0603 13:52:14.712631 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.712639 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:14.712649 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:14.712663 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:14.787026 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:14.787068 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:14.836534 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:14.836564 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:14.889682 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:14.889729 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:14.905230 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:14.905264 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:14.979090 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:13.317070 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.317490 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.816412 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.871901 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.370626 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:16.989195 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.990108 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.479590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:17.495088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:17.495250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:17.530832 1143678 cri.go:89] found id: ""
	I0603 13:52:17.530871 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.530883 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:17.530891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:17.530966 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:17.567183 1143678 cri.go:89] found id: ""
	I0603 13:52:17.567213 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.567224 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:17.567232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:17.567305 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:17.602424 1143678 cri.go:89] found id: ""
	I0603 13:52:17.602458 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.602469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:17.602493 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:17.602570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:17.641148 1143678 cri.go:89] found id: ""
	I0603 13:52:17.641184 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.641197 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:17.641205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:17.641273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:17.679004 1143678 cri.go:89] found id: ""
	I0603 13:52:17.679031 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.679039 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:17.679045 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:17.679102 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:17.717667 1143678 cri.go:89] found id: ""
	I0603 13:52:17.717698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.717707 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:17.717715 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:17.717786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:17.760262 1143678 cri.go:89] found id: ""
	I0603 13:52:17.760300 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.760323 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:17.760331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:17.760416 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:17.796910 1143678 cri.go:89] found id: ""
	I0603 13:52:17.796943 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.796960 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:17.796976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:17.796990 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:17.811733 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:17.811768 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:17.891891 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:17.891920 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:17.891939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:17.969495 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:17.969535 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:18.032622 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:18.032654 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.586079 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:20.599118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:20.599202 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:20.633732 1143678 cri.go:89] found id: ""
	I0603 13:52:20.633770 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.633780 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:20.633787 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:20.633841 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:20.668126 1143678 cri.go:89] found id: ""
	I0603 13:52:20.668155 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.668163 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:20.668169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:20.668231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:20.704144 1143678 cri.go:89] found id: ""
	I0603 13:52:20.704177 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.704187 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:20.704194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:20.704251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:20.745562 1143678 cri.go:89] found id: ""
	I0603 13:52:20.745594 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.745602 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:20.745608 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:20.745663 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:20.788998 1143678 cri.go:89] found id: ""
	I0603 13:52:20.789041 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.789053 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:20.789075 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:20.789152 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:20.832466 1143678 cri.go:89] found id: ""
	I0603 13:52:20.832495 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.832503 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:20.832510 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:20.832575 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:20.875212 1143678 cri.go:89] found id: ""
	I0603 13:52:20.875248 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.875258 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:20.875267 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:20.875336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:20.912957 1143678 cri.go:89] found id: ""
	I0603 13:52:20.912989 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.912999 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:20.913011 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:20.913030 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.963655 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:20.963700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:20.978619 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:20.978658 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:21.057136 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:21.057163 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:21.057185 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:21.136368 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:21.136415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:19.817227 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.817625 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:20.871465 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.370757 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.488564 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.991662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.676222 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:23.691111 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:23.691213 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:23.733282 1143678 cri.go:89] found id: ""
	I0603 13:52:23.733319 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.733332 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:23.733341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:23.733438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:23.780841 1143678 cri.go:89] found id: ""
	I0603 13:52:23.780873 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.780882 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:23.780894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:23.780947 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:23.820521 1143678 cri.go:89] found id: ""
	I0603 13:52:23.820553 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.820565 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:23.820573 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:23.820636 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:23.857684 1143678 cri.go:89] found id: ""
	I0603 13:52:23.857728 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.857739 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:23.857747 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:23.857818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:23.896800 1143678 cri.go:89] found id: ""
	I0603 13:52:23.896829 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.896842 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:23.896850 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:23.896914 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:23.935511 1143678 cri.go:89] found id: ""
	I0603 13:52:23.935538 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.935547 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:23.935554 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:23.935608 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:23.973858 1143678 cri.go:89] found id: ""
	I0603 13:52:23.973885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.973895 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:23.973901 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:23.973961 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:24.012491 1143678 cri.go:89] found id: ""
	I0603 13:52:24.012521 1143678 logs.go:276] 0 containers: []
	W0603 13:52:24.012532 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:24.012545 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:24.012569 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.064274 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:24.064319 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:24.079382 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:24.079420 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:24.153708 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:24.153733 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:24.153749 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:24.233104 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:24.233148 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:26.774771 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:26.789853 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:26.789924 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:26.830089 1143678 cri.go:89] found id: ""
	I0603 13:52:26.830129 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.830167 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:26.830176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:26.830251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:26.866907 1143678 cri.go:89] found id: ""
	I0603 13:52:26.866941 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.866952 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:26.866960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:26.867031 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:26.915028 1143678 cri.go:89] found id: ""
	I0603 13:52:26.915061 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.915070 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:26.915079 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:26.915151 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:26.962044 1143678 cri.go:89] found id: ""
	I0603 13:52:26.962075 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.962083 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:26.962088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:26.962154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:26.996156 1143678 cri.go:89] found id: ""
	I0603 13:52:26.996188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.996196 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:26.996202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:26.996265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:27.038593 1143678 cri.go:89] found id: ""
	I0603 13:52:27.038627 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.038636 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:27.038642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:27.038708 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:27.076116 1143678 cri.go:89] found id: ""
	I0603 13:52:27.076144 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.076153 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:27.076159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:27.076228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:27.110653 1143678 cri.go:89] found id: ""
	I0603 13:52:27.110688 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.110700 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:27.110714 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:27.110733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:27.193718 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:27.193743 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:27.193756 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:27.269423 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:27.269483 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:27.307899 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:27.307939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.317663 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.817148 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:25.371861 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.870070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:29.870299 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.488753 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:28.489065 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:30.489568 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.363830 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:27.363878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:29.879016 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:29.893482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:29.893553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:29.932146 1143678 cri.go:89] found id: ""
	I0603 13:52:29.932190 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.932199 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:29.932205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:29.932259 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:29.968986 1143678 cri.go:89] found id: ""
	I0603 13:52:29.969020 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.969032 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:29.969040 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:29.969097 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:30.007190 1143678 cri.go:89] found id: ""
	I0603 13:52:30.007228 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.007238 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:30.007244 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:30.007303 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:30.044607 1143678 cri.go:89] found id: ""
	I0603 13:52:30.044638 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.044646 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:30.044652 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:30.044706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:30.083103 1143678 cri.go:89] found id: ""
	I0603 13:52:30.083179 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.083193 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:30.083204 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:30.083280 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:30.124125 1143678 cri.go:89] found id: ""
	I0603 13:52:30.124152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.124160 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:30.124167 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:30.124234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:30.164293 1143678 cri.go:89] found id: ""
	I0603 13:52:30.164329 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.164345 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:30.164353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:30.164467 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:30.219980 1143678 cri.go:89] found id: ""
	I0603 13:52:30.220015 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.220028 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:30.220042 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:30.220063 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:30.313282 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:30.313305 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:30.313323 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:30.393759 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:30.393801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:30.441384 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:30.441434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:30.493523 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:30.493558 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:28.817554 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.317629 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.870659 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.870954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:32.990340 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.495665 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.009114 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:33.023177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:33.023278 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:33.065346 1143678 cri.go:89] found id: ""
	I0603 13:52:33.065388 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.065400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:33.065424 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:33.065506 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:33.108513 1143678 cri.go:89] found id: ""
	I0603 13:52:33.108549 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.108561 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:33.108569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:33.108640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:33.146053 1143678 cri.go:89] found id: ""
	I0603 13:52:33.146082 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.146089 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:33.146107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:33.146165 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:33.187152 1143678 cri.go:89] found id: ""
	I0603 13:52:33.187195 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.187206 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:33.187216 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:33.187302 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:33.223887 1143678 cri.go:89] found id: ""
	I0603 13:52:33.223920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.223932 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:33.223941 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:33.224010 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:33.263902 1143678 cri.go:89] found id: ""
	I0603 13:52:33.263958 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.263971 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:33.263980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:33.264048 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:33.302753 1143678 cri.go:89] found id: ""
	I0603 13:52:33.302785 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.302796 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:33.302805 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:33.302859 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:33.340711 1143678 cri.go:89] found id: ""
	I0603 13:52:33.340745 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.340754 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:33.340763 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:33.340780 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:33.400226 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:33.400271 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:33.414891 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:33.414923 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:33.498121 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:33.498156 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:33.498172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.575682 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:33.575731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.116930 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:36.133001 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:36.133070 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:36.182727 1143678 cri.go:89] found id: ""
	I0603 13:52:36.182763 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.182774 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:36.182782 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:36.182851 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:36.228804 1143678 cri.go:89] found id: ""
	I0603 13:52:36.228841 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.228854 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:36.228862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:36.228929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:36.279320 1143678 cri.go:89] found id: ""
	I0603 13:52:36.279359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.279370 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:36.279378 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:36.279461 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:36.319725 1143678 cri.go:89] found id: ""
	I0603 13:52:36.319751 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.319759 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:36.319765 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:36.319819 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:36.356657 1143678 cri.go:89] found id: ""
	I0603 13:52:36.356685 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.356693 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:36.356703 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:36.356760 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:36.393397 1143678 cri.go:89] found id: ""
	I0603 13:52:36.393448 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.393459 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:36.393467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:36.393545 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:36.429211 1143678 cri.go:89] found id: ""
	I0603 13:52:36.429246 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.429254 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:36.429260 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:36.429324 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:36.466796 1143678 cri.go:89] found id: ""
	I0603 13:52:36.466831 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.466839 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:36.466849 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:36.466862 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.509871 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:36.509900 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:36.562167 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:36.562206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:36.577014 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:36.577047 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:36.657581 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:36.657604 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:36.657625 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.817495 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.820854 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:36.371645 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:38.871484 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:37.989038 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.989986 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.242339 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:39.257985 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:39.258072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:39.300153 1143678 cri.go:89] found id: ""
	I0603 13:52:39.300185 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.300197 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:39.300205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:39.300304 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:39.336117 1143678 cri.go:89] found id: ""
	I0603 13:52:39.336152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.336162 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:39.336175 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:39.336307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:39.375945 1143678 cri.go:89] found id: ""
	I0603 13:52:39.375979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.375990 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:39.375998 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:39.376066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:39.417207 1143678 cri.go:89] found id: ""
	I0603 13:52:39.417242 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.417253 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:39.417261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:39.417340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:39.456259 1143678 cri.go:89] found id: ""
	I0603 13:52:39.456295 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.456307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:39.456315 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:39.456377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:39.494879 1143678 cri.go:89] found id: ""
	I0603 13:52:39.494904 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.494913 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:39.494919 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:39.494979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:39.532129 1143678 cri.go:89] found id: ""
	I0603 13:52:39.532157 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.532168 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:39.532177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:39.532267 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:39.570662 1143678 cri.go:89] found id: ""
	I0603 13:52:39.570693 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.570703 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:39.570717 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:39.570734 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:39.622008 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:39.622057 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:39.636849 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:39.636884 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:39.719914 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:39.719948 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:39.719967 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:39.801723 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:39.801769 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:38.317321 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:40.817649 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.819652 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:41.370965 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:43.371900 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.490311 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:44.988731 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.348936 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:42.363663 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:42.363735 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:42.400584 1143678 cri.go:89] found id: ""
	I0603 13:52:42.400616 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.400625 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:42.400631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:42.400685 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:42.438853 1143678 cri.go:89] found id: ""
	I0603 13:52:42.438885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.438893 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:42.438899 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:42.438954 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:42.474980 1143678 cri.go:89] found id: ""
	I0603 13:52:42.475013 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.475025 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:42.475032 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:42.475086 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:42.511027 1143678 cri.go:89] found id: ""
	I0603 13:52:42.511056 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.511068 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:42.511077 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:42.511237 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:42.545333 1143678 cri.go:89] found id: ""
	I0603 13:52:42.545367 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.545378 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:42.545386 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:42.545468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:42.583392 1143678 cri.go:89] found id: ""
	I0603 13:52:42.583438 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.583556 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:42.583591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:42.583656 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:42.620886 1143678 cri.go:89] found id: ""
	I0603 13:52:42.620916 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.620924 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:42.620930 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:42.620985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:42.656265 1143678 cri.go:89] found id: ""
	I0603 13:52:42.656301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.656313 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:42.656327 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:42.656344 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:42.711078 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:42.711124 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:42.727751 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:42.727788 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:42.802330 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:42.802356 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:42.802370 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:42.883700 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:42.883742 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.424591 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:45.440797 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:45.440883 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:45.483664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.483698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.483709 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:45.483717 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:45.483789 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:45.523147 1143678 cri.go:89] found id: ""
	I0603 13:52:45.523182 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.523193 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:45.523201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:45.523273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:45.563483 1143678 cri.go:89] found id: ""
	I0603 13:52:45.563516 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.563527 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:45.563536 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:45.563598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:45.603574 1143678 cri.go:89] found id: ""
	I0603 13:52:45.603603 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.603618 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:45.603625 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:45.603680 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:45.642664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.642694 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.642705 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:45.642714 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:45.642793 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:45.679961 1143678 cri.go:89] found id: ""
	I0603 13:52:45.679998 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.680011 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:45.680026 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:45.680100 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:45.716218 1143678 cri.go:89] found id: ""
	I0603 13:52:45.716255 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.716263 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:45.716270 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:45.716364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:45.752346 1143678 cri.go:89] found id: ""
	I0603 13:52:45.752374 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.752382 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:45.752391 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:45.752405 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.793992 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:45.794029 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:45.844930 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:45.844973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:45.859594 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:45.859633 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:45.936469 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:45.936498 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:45.936515 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:45.317705 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.818994 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:45.870780 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.871003 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.871625 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:46.990866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.488680 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:48.514959 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:48.528331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:48.528401 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:48.565671 1143678 cri.go:89] found id: ""
	I0603 13:52:48.565703 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.565715 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:48.565724 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:48.565786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:48.603938 1143678 cri.go:89] found id: ""
	I0603 13:52:48.603973 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.603991 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:48.604000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:48.604068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:48.643521 1143678 cri.go:89] found id: ""
	I0603 13:52:48.643550 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.643562 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:48.643571 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:48.643627 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:48.678264 1143678 cri.go:89] found id: ""
	I0603 13:52:48.678301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.678312 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:48.678320 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:48.678407 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:48.714974 1143678 cri.go:89] found id: ""
	I0603 13:52:48.715014 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.715026 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:48.715034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:48.715138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:48.750364 1143678 cri.go:89] found id: ""
	I0603 13:52:48.750396 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.750408 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:48.750416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:48.750482 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:48.788203 1143678 cri.go:89] found id: ""
	I0603 13:52:48.788238 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.788249 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:48.788258 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:48.788345 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:48.826891 1143678 cri.go:89] found id: ""
	I0603 13:52:48.826920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.826928 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:48.826938 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:48.826951 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:48.877271 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:48.877315 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:48.892155 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:48.892187 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:48.973433 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:48.973459 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:48.973473 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:49.062819 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:49.062888 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:51.614261 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:51.628056 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:51.628142 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:51.662894 1143678 cri.go:89] found id: ""
	I0603 13:52:51.662924 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.662935 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:51.662942 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:51.663009 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:51.701847 1143678 cri.go:89] found id: ""
	I0603 13:52:51.701878 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.701889 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:51.701896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:51.701963 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:51.737702 1143678 cri.go:89] found id: ""
	I0603 13:52:51.737741 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.737752 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:51.737760 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:51.737833 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:51.772913 1143678 cri.go:89] found id: ""
	I0603 13:52:51.772944 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.772956 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:51.772964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:51.773034 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:51.810268 1143678 cri.go:89] found id: ""
	I0603 13:52:51.810298 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.810307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:51.810312 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:51.810377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:51.848575 1143678 cri.go:89] found id: ""
	I0603 13:52:51.848612 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.848624 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:51.848633 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:51.848696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:51.886500 1143678 cri.go:89] found id: ""
	I0603 13:52:51.886536 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.886549 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:51.886560 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:51.886617 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:51.924070 1143678 cri.go:89] found id: ""
	I0603 13:52:51.924104 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.924115 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:51.924128 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:51.924146 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:51.940324 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:51.940355 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:52.019958 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:52.019997 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:52.020015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:52.095953 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:52.095999 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:52.141070 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:52.141102 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:50.317008 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:52.317142 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.872275 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.376761 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.490098 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:53.491292 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.694651 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:54.708508 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:54.708597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:54.745708 1143678 cri.go:89] found id: ""
	I0603 13:52:54.745748 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.745762 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:54.745770 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:54.745842 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:54.783335 1143678 cri.go:89] found id: ""
	I0603 13:52:54.783369 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.783381 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:54.783389 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:54.783465 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:54.824111 1143678 cri.go:89] found id: ""
	I0603 13:52:54.824140 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.824151 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:54.824159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:54.824230 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:54.868676 1143678 cri.go:89] found id: ""
	I0603 13:52:54.868710 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.868721 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:54.868730 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:54.868801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:54.906180 1143678 cri.go:89] found id: ""
	I0603 13:52:54.906216 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.906227 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:54.906235 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:54.906310 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:54.945499 1143678 cri.go:89] found id: ""
	I0603 13:52:54.945532 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.945544 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:54.945552 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:54.945619 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:54.986785 1143678 cri.go:89] found id: ""
	I0603 13:52:54.986812 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.986820 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:54.986826 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:54.986888 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:55.035290 1143678 cri.go:89] found id: ""
	I0603 13:52:55.035320 1143678 logs.go:276] 0 containers: []
	W0603 13:52:55.035329 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:55.035338 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:55.035352 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:55.085384 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:55.085451 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:55.100699 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:55.100733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:55.171587 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:55.171614 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:55.171638 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:55.249078 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:55.249123 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:54.317435 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.318657 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.869954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.872728 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:55.990512 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.489578 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:00.490668 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:57.791538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:57.804373 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:57.804437 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:57.843969 1143678 cri.go:89] found id: ""
	I0603 13:52:57.844007 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.844016 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:57.844022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:57.844077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:57.881201 1143678 cri.go:89] found id: ""
	I0603 13:52:57.881239 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.881252 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:57.881261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:57.881336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:57.917572 1143678 cri.go:89] found id: ""
	I0603 13:52:57.917601 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.917610 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:57.917617 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:57.917671 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:57.951603 1143678 cri.go:89] found id: ""
	I0603 13:52:57.951642 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.951654 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:57.951661 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:57.951716 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:57.992833 1143678 cri.go:89] found id: ""
	I0603 13:52:57.992863 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.992874 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:57.992881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:57.992945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:58.031595 1143678 cri.go:89] found id: ""
	I0603 13:52:58.031636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.031648 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:58.031657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:58.031723 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:58.068947 1143678 cri.go:89] found id: ""
	I0603 13:52:58.068985 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.068996 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:58.069005 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:58.069077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:58.106559 1143678 cri.go:89] found id: ""
	I0603 13:52:58.106587 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.106598 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:58.106623 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:58.106640 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:58.162576 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:58.162623 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:58.177104 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:58.177155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:58.250279 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:58.250312 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:58.250329 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.330876 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:58.330920 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:00.871443 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:00.885505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:00.885589 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:00.923878 1143678 cri.go:89] found id: ""
	I0603 13:53:00.923910 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.923920 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:00.923928 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:00.923995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:00.960319 1143678 cri.go:89] found id: ""
	I0603 13:53:00.960362 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.960375 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:00.960384 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:00.960449 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:00.998806 1143678 cri.go:89] found id: ""
	I0603 13:53:00.998845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.998857 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:00.998866 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:00.998929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:01.033211 1143678 cri.go:89] found id: ""
	I0603 13:53:01.033245 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.033256 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:01.033265 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:01.033341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:01.072852 1143678 cri.go:89] found id: ""
	I0603 13:53:01.072883 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.072891 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:01.072898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:01.072950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:01.115667 1143678 cri.go:89] found id: ""
	I0603 13:53:01.115699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.115711 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:01.115719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:01.115824 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:01.153676 1143678 cri.go:89] found id: ""
	I0603 13:53:01.153717 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.153733 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:01.153741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:01.153815 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:01.188970 1143678 cri.go:89] found id: ""
	I0603 13:53:01.189003 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.189017 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:01.189031 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:01.189049 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:01.233151 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:01.233214 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:01.287218 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:01.287269 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:01.302370 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:01.302408 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:01.378414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:01.378444 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:01.378463 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.817003 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.317698 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.371257 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.872917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:02.989133 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:04.990930 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.957327 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:03.971246 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:03.971340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:04.007299 1143678 cri.go:89] found id: ""
	I0603 13:53:04.007335 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.007347 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:04.007356 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:04.007427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:04.046364 1143678 cri.go:89] found id: ""
	I0603 13:53:04.046396 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.046405 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:04.046411 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:04.046469 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:04.082094 1143678 cri.go:89] found id: ""
	I0603 13:53:04.082127 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.082139 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:04.082148 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:04.082209 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:04.117389 1143678 cri.go:89] found id: ""
	I0603 13:53:04.117434 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.117446 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:04.117454 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:04.117530 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:04.150560 1143678 cri.go:89] found id: ""
	I0603 13:53:04.150596 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.150606 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:04.150614 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:04.150678 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:04.184808 1143678 cri.go:89] found id: ""
	I0603 13:53:04.184845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.184857 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:04.184865 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:04.184935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:04.220286 1143678 cri.go:89] found id: ""
	I0603 13:53:04.220317 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.220326 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:04.220332 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:04.220385 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:04.258898 1143678 cri.go:89] found id: ""
	I0603 13:53:04.258929 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.258941 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:04.258955 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:04.258972 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:04.312151 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:04.312198 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:04.329908 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:04.329943 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:04.402075 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:04.402106 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:04.402138 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:04.482873 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:04.482936 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:07.049978 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:07.063072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:07.063140 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:07.097703 1143678 cri.go:89] found id: ""
	I0603 13:53:07.097737 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.097748 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:07.097755 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:07.097811 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:07.134826 1143678 cri.go:89] found id: ""
	I0603 13:53:07.134865 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.134878 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:07.134886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:07.134955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:07.178015 1143678 cri.go:89] found id: ""
	I0603 13:53:07.178050 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.178061 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:07.178068 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:07.178138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:07.215713 1143678 cri.go:89] found id: ""
	I0603 13:53:07.215753 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.215764 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:07.215777 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:07.215840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:07.251787 1143678 cri.go:89] found id: ""
	I0603 13:53:07.251815 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.251824 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:07.251830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:07.251897 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:07.293357 1143678 cri.go:89] found id: ""
	I0603 13:53:07.293387 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.293398 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:07.293427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:07.293496 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:07.329518 1143678 cri.go:89] found id: ""
	I0603 13:53:07.329551 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.329561 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:07.329569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:07.329650 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:03.819203 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.317653 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.370539 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:08.370701 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.490706 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:09.990002 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.369534 1143678 cri.go:89] found id: ""
	I0603 13:53:07.369576 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.369587 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:07.369601 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:07.369617 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:07.424211 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:07.424260 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:07.439135 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:07.439172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:07.511325 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:07.511360 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:07.511378 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:07.588348 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:07.588393 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:10.129812 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:10.143977 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:10.144057 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:10.181873 1143678 cri.go:89] found id: ""
	I0603 13:53:10.181906 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.181918 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:10.181926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:10.181981 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:10.218416 1143678 cri.go:89] found id: ""
	I0603 13:53:10.218460 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.218473 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:10.218482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:10.218562 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:10.253580 1143678 cri.go:89] found id: ""
	I0603 13:53:10.253618 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.253630 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:10.253646 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:10.253717 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:10.302919 1143678 cri.go:89] found id: ""
	I0603 13:53:10.302949 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.302957 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:10.302964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:10.303024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:10.343680 1143678 cri.go:89] found id: ""
	I0603 13:53:10.343709 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.343721 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:10.343729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:10.343798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:10.379281 1143678 cri.go:89] found id: ""
	I0603 13:53:10.379307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.379315 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:10.379322 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:10.379374 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:10.420197 1143678 cri.go:89] found id: ""
	I0603 13:53:10.420225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.420233 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:10.420239 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:10.420322 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:10.458578 1143678 cri.go:89] found id: ""
	I0603 13:53:10.458609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.458618 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:10.458629 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:10.458642 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:10.511785 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:10.511828 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:10.526040 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:10.526081 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:10.603721 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:10.603749 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:10.603766 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:10.684153 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:10.684204 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:08.816447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.318264 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:10.374788 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:12.871019 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.871064 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.992127 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.488866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:13.227605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:13.241131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:13.241228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:13.284636 1143678 cri.go:89] found id: ""
	I0603 13:53:13.284667 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.284675 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:13.284681 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:13.284737 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:13.322828 1143678 cri.go:89] found id: ""
	I0603 13:53:13.322861 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.322873 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:13.322881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:13.322945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:13.360061 1143678 cri.go:89] found id: ""
	I0603 13:53:13.360089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.360097 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:13.360103 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:13.360176 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:13.397115 1143678 cri.go:89] found id: ""
	I0603 13:53:13.397149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.397158 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:13.397164 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:13.397234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:13.434086 1143678 cri.go:89] found id: ""
	I0603 13:53:13.434118 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.434127 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:13.434135 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:13.434194 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:13.470060 1143678 cri.go:89] found id: ""
	I0603 13:53:13.470089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.470101 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:13.470113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:13.470189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:13.508423 1143678 cri.go:89] found id: ""
	I0603 13:53:13.508464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.508480 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:13.508487 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:13.508552 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:13.546713 1143678 cri.go:89] found id: ""
	I0603 13:53:13.546752 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.546765 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:13.546778 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:13.546796 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:13.632984 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:13.633027 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:13.679169 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:13.679216 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:13.735765 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:13.735812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.750175 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:13.750210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:13.826571 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.327185 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:16.340163 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:16.340253 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:16.380260 1143678 cri.go:89] found id: ""
	I0603 13:53:16.380292 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.380300 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:16.380307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:16.380373 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:16.420408 1143678 cri.go:89] found id: ""
	I0603 13:53:16.420438 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.420449 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:16.420457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:16.420534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:16.459250 1143678 cri.go:89] found id: ""
	I0603 13:53:16.459285 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.459297 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:16.459307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:16.459377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:16.496395 1143678 cri.go:89] found id: ""
	I0603 13:53:16.496427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.496436 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:16.496444 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:16.496516 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:16.534402 1143678 cri.go:89] found id: ""
	I0603 13:53:16.534433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.534442 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:16.534449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:16.534514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:16.571550 1143678 cri.go:89] found id: ""
	I0603 13:53:16.571577 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.571584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:16.571591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:16.571659 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:16.608425 1143678 cri.go:89] found id: ""
	I0603 13:53:16.608457 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.608468 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:16.608482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:16.608549 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:16.647282 1143678 cri.go:89] found id: ""
	I0603 13:53:16.647315 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.647324 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:16.647334 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:16.647351 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:16.728778 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.728814 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:16.728831 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:16.822702 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:16.822747 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:16.868816 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:16.868845 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:16.922262 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:16.922301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.818935 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.316865 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:17.370681 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.371232 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.489494 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:18.490176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:20.491433 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
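
The interleaved pod_ready lines come from parallel StartStop tests polling the Ready condition of their metrics-server pods. A minimal way to reproduce such a poll outside the harness is to ask kubectl for the Ready condition on an interval. The sketch below is illustrative only (not minikube's pod_ready.go); the pod name is taken from the log, and kubectl plus a working kubeconfig are assumed.

// pod_ready_poll.go: poll a pod's Ready condition until it reports True,
// printing each observation in roughly the same shape as the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// readyStatus asks the API server for the pod's Ready condition ("True"/"False").
func readyStatus(namespace, pod string) (string, error) {
	out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
		"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	namespace, pod := "kube-system", "metrics-server-569cc877fc-v7d9t"

	// A bounded number of attempts stands in for the test's longer deadline.
	for i := 0; i < 10; i++ {
		status, err := readyStatus(namespace, pod)
		if err != nil {
			fmt.Println("lookup failed:", err)
		} else {
			fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", pod, namespace, status)
			if status == "True" {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}
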
	I0603 13:53:19.438231 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:19.452520 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:19.452603 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:19.488089 1143678 cri.go:89] found id: ""
	I0603 13:53:19.488121 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.488133 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:19.488141 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:19.488216 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:19.524494 1143678 cri.go:89] found id: ""
	I0603 13:53:19.524527 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.524537 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:19.524543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:19.524595 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:19.561288 1143678 cri.go:89] found id: ""
	I0603 13:53:19.561323 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.561333 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:19.561341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:19.561420 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:19.597919 1143678 cri.go:89] found id: ""
	I0603 13:53:19.597965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.597976 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:19.597984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:19.598056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:19.634544 1143678 cri.go:89] found id: ""
	I0603 13:53:19.634579 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.634591 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:19.634599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:19.634668 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:19.671473 1143678 cri.go:89] found id: ""
	I0603 13:53:19.671506 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.671518 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:19.671527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:19.671598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:19.707968 1143678 cri.go:89] found id: ""
	I0603 13:53:19.708000 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.708011 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:19.708019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:19.708119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:19.745555 1143678 cri.go:89] found id: ""
	I0603 13:53:19.745593 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.745604 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:19.745617 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:19.745631 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:19.830765 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:19.830812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:19.875160 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:19.875197 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:19.927582 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:19.927627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:19.942258 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:19.942289 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:20.016081 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:18.820067 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.319103 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.871214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.371680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.990210 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.990605 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.516859 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:22.534973 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:22.535040 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:22.593003 1143678 cri.go:89] found id: ""
	I0603 13:53:22.593043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.593051 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:22.593058 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:22.593121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:22.649916 1143678 cri.go:89] found id: ""
	I0603 13:53:22.649951 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.649963 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:22.649971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:22.650030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:22.689397 1143678 cri.go:89] found id: ""
	I0603 13:53:22.689449 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.689459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:22.689465 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:22.689521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:22.725109 1143678 cri.go:89] found id: ""
	I0603 13:53:22.725149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.725161 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:22.725169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:22.725250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:22.761196 1143678 cri.go:89] found id: ""
	I0603 13:53:22.761225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.761237 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:22.761245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:22.761311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:22.804065 1143678 cri.go:89] found id: ""
	I0603 13:53:22.804103 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.804112 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:22.804119 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:22.804189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:22.840456 1143678 cri.go:89] found id: ""
	I0603 13:53:22.840485 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.840493 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:22.840499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:22.840553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:22.876796 1143678 cri.go:89] found id: ""
	I0603 13:53:22.876831 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.876842 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:22.876854 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:22.876869 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:22.957274 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:22.957317 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:22.998360 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:22.998394 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.054895 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:23.054942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:23.070107 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:23.070141 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:23.147460 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
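
Each retry cycle above walks the same list of control-plane components and asks crictl for matching containers; the `found id: ""` / `0 containers` lines mean every query came back empty because none of those pods were ever started. The standalone sketch below shows the same per-component query pattern; it is illustrative only (not minikube's cri.go) and assumes crictl and sudo are available on the node.

// crictl_list.go: list container IDs for each control-plane component,
// mirroring the repeated "crictl ps -a --quiet --name=<component>" queries.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	for _, name := range components {
		// --quiet prints only container IDs, one per line; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
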
	I0603 13:53:25.647727 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:25.663603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:25.663691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:25.698102 1143678 cri.go:89] found id: ""
	I0603 13:53:25.698139 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.698150 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:25.698159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:25.698227 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:25.738601 1143678 cri.go:89] found id: ""
	I0603 13:53:25.738641 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.738648 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:25.738655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:25.738718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:25.780622 1143678 cri.go:89] found id: ""
	I0603 13:53:25.780657 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.780670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:25.780678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:25.780751 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:25.816950 1143678 cri.go:89] found id: ""
	I0603 13:53:25.816978 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.816989 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:25.816997 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:25.817060 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:25.860011 1143678 cri.go:89] found id: ""
	I0603 13:53:25.860051 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.860063 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:25.860072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:25.860138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:25.898832 1143678 cri.go:89] found id: ""
	I0603 13:53:25.898866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.898878 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:25.898886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:25.898959 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:25.937483 1143678 cri.go:89] found id: ""
	I0603 13:53:25.937518 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.937533 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:25.937541 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:25.937607 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:25.973972 1143678 cri.go:89] found id: ""
	I0603 13:53:25.974008 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.974021 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:25.974034 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:25.974065 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:25.989188 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:25.989227 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:26.065521 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:26.065546 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:26.065560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:26.147852 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:26.147899 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:26.191395 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:26.191431 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.816928 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:25.818534 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:26.872084 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.872558 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:27.489951 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:29.989352 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.751041 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:28.764764 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:28.764826 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:28.808232 1143678 cri.go:89] found id: ""
	I0603 13:53:28.808271 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.808285 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:28.808293 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:28.808369 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:28.849058 1143678 cri.go:89] found id: ""
	I0603 13:53:28.849094 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.849107 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:28.849114 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:28.849187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:28.892397 1143678 cri.go:89] found id: ""
	I0603 13:53:28.892427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.892441 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:28.892447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:28.892515 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:28.932675 1143678 cri.go:89] found id: ""
	I0603 13:53:28.932715 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.932727 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:28.932735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:28.932840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:28.969732 1143678 cri.go:89] found id: ""
	I0603 13:53:28.969769 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.969781 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:28.969789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:28.969857 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:29.007765 1143678 cri.go:89] found id: ""
	I0603 13:53:29.007791 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.007798 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:29.007804 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:29.007865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:29.044616 1143678 cri.go:89] found id: ""
	I0603 13:53:29.044652 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.044664 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:29.044675 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:29.044734 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:29.081133 1143678 cri.go:89] found id: ""
	I0603 13:53:29.081166 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.081187 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:29.081198 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:29.081213 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:29.095753 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:29.095783 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:29.174472 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:29.174496 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:29.174516 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:29.251216 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:29.251262 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:29.289127 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:29.289168 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
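
Besides the crictl queries, each cycle gathers host-level logs: the last 400 lines of the CRI-O and kubelet journals plus recent kernel warnings from dmesg. The sketch below runs the same three commands from a standalone program; it is illustrative only and assumes a systemd host with sudo, as in the test VM.

// gather_logs.go: collect the CRI-O journal, the kubelet journal, and recent
// kernel warnings, using the same commands that appear in the log above.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell pipeline with sudo, much like the ssh_runner lines above.
func run(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", "sudo "+cmd).CombinedOutput()
	return string(out), err
}

func main() {
	gathers := []struct{ name, cmd string }{
		{"CRI-O", "journalctl -u crio -n 400"},
		{"kubelet", "journalctl -u kubelet -n 400"},
		{"dmesg", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}

	for _, g := range gathers {
		fmt.Printf("==> gathering logs for %s ...\n", g.name)
		out, err := run(g.cmd)
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", g.name, err)
		}
		fmt.Println(out)
	}
}
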
	I0603 13:53:31.845335 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:31.860631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:31.860720 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:31.904507 1143678 cri.go:89] found id: ""
	I0603 13:53:31.904544 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.904556 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:31.904564 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:31.904633 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:31.940795 1143678 cri.go:89] found id: ""
	I0603 13:53:31.940832 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.940845 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:31.940852 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:31.940921 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:31.978447 1143678 cri.go:89] found id: ""
	I0603 13:53:31.978481 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.978499 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:31.978507 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:31.978569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:32.017975 1143678 cri.go:89] found id: ""
	I0603 13:53:32.018009 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.018018 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:32.018025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:32.018089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:32.053062 1143678 cri.go:89] found id: ""
	I0603 13:53:32.053091 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.053099 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:32.053106 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:32.053181 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:32.089822 1143678 cri.go:89] found id: ""
	I0603 13:53:32.089856 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.089868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:32.089877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:32.089944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:32.126243 1143678 cri.go:89] found id: ""
	I0603 13:53:32.126280 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.126291 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:32.126299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:32.126358 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:32.163297 1143678 cri.go:89] found id: ""
	I0603 13:53:32.163346 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.163357 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:32.163370 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:32.163386 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:32.218452 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:32.218495 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:32.233688 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:32.233731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:32.318927 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:32.318947 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:32.318963 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:28.317046 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:30.317308 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.318273 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.370654 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:33.371038 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.991594 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:34.492142 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.403734 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:32.403786 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:34.947857 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:34.961894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:34.961983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:35.006279 1143678 cri.go:89] found id: ""
	I0603 13:53:35.006308 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.006318 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:35.006326 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:35.006398 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:35.042765 1143678 cri.go:89] found id: ""
	I0603 13:53:35.042794 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.042807 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:35.042815 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:35.042877 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:35.084332 1143678 cri.go:89] found id: ""
	I0603 13:53:35.084365 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.084375 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:35.084381 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:35.084448 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:35.121306 1143678 cri.go:89] found id: ""
	I0603 13:53:35.121337 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.121348 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:35.121358 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:35.121444 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:35.155952 1143678 cri.go:89] found id: ""
	I0603 13:53:35.155994 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.156008 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:35.156016 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:35.156089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:35.196846 1143678 cri.go:89] found id: ""
	I0603 13:53:35.196881 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.196893 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:35.196902 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:35.196972 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:35.232396 1143678 cri.go:89] found id: ""
	I0603 13:53:35.232429 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.232440 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:35.232449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:35.232528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:35.269833 1143678 cri.go:89] found id: ""
	I0603 13:53:35.269862 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.269872 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:35.269885 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:35.269902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:35.357754 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:35.357794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:35.399793 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:35.399822 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:35.453742 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:35.453782 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:35.468431 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:35.468465 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:35.547817 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:34.816178 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.817093 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:35.373072 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:37.870173 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.989364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.990163 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
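
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, a check for a kube-apiserver process whose full command line mentions minikube. The sketch below runs that check in isolation; it is illustrative only and assumes sudo on the node.

// apiserver_pgrep.go: look for a running kube-apiserver process by matching
// its full command line, the same probe that opens each retry cycle above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -f matches against the full command line, -x requires the whole line to
	// match the pattern, and -n reports only the newest matching process.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits with status 1 when nothing matches.
		fmt.Println("no running kube-apiserver process found:", err)
		return
	}
	fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
}
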
	I0603 13:53:38.048517 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:38.063481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:38.063569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:38.100487 1143678 cri.go:89] found id: ""
	I0603 13:53:38.100523 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.100535 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:38.100543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:38.100612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:38.137627 1143678 cri.go:89] found id: ""
	I0603 13:53:38.137665 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.137678 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:38.137686 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:38.137754 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:38.176138 1143678 cri.go:89] found id: ""
	I0603 13:53:38.176172 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.176190 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:38.176199 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:38.176265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:38.214397 1143678 cri.go:89] found id: ""
	I0603 13:53:38.214439 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.214451 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:38.214459 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:38.214528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:38.250531 1143678 cri.go:89] found id: ""
	I0603 13:53:38.250563 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.250573 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:38.250580 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:38.250642 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:38.286558 1143678 cri.go:89] found id: ""
	I0603 13:53:38.286587 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.286595 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:38.286601 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:38.286652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:38.327995 1143678 cri.go:89] found id: ""
	I0603 13:53:38.328043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.328055 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:38.328062 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:38.328126 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:38.374266 1143678 cri.go:89] found id: ""
	I0603 13:53:38.374300 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.374311 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:38.374324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:38.374341 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:38.426876 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:38.426918 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:38.443296 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:38.443340 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:38.514702 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:38.514728 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:38.514746 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:38.601536 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:38.601590 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
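
The "container status" gather uses a small shell fallback: resolve crictl via `which` (or keep the bare name) and, if that listing fails, fall back to `docker ps -a`. The sketch below runs the same expression from a standalone program; it is illustrative only and assumes bash and sudo on the host.

// container_status.go: list all containers via crictl, falling back to docker,
// using the same shell expression that appears in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// If crictl is not on PATH, `which` fails and the bare name is used anyway;
	// if that command then errors, the || falls through to docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"

	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
	fmt.Print(string(out))
}
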
	I0603 13:53:41.141766 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:41.155927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:41.156006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:41.196829 1143678 cri.go:89] found id: ""
	I0603 13:53:41.196871 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.196884 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:41.196896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:41.196967 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:41.231729 1143678 cri.go:89] found id: ""
	I0603 13:53:41.231780 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.231802 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:41.231812 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:41.231900 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:41.266663 1143678 cri.go:89] found id: ""
	I0603 13:53:41.266699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.266711 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:41.266720 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:41.266783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:41.305251 1143678 cri.go:89] found id: ""
	I0603 13:53:41.305278 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.305286 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:41.305292 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:41.305351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:41.342527 1143678 cri.go:89] found id: ""
	I0603 13:53:41.342556 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.342568 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:41.342575 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:41.342637 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:41.379950 1143678 cri.go:89] found id: ""
	I0603 13:53:41.379982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.379992 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:41.379999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:41.380068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:41.414930 1143678 cri.go:89] found id: ""
	I0603 13:53:41.414965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.414973 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:41.414980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:41.415043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:41.449265 1143678 cri.go:89] found id: ""
	I0603 13:53:41.449299 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.449310 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:41.449324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:41.449343 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:41.502525 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:41.502560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:41.519357 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:41.519390 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:41.591443 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:41.591471 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:41.591485 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:41.668758 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:41.668802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:39.317333 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.317598 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:40.370844 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:42.871161 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.489574 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:43.989620 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:44.211768 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:44.226789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:44.226869 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:44.265525 1143678 cri.go:89] found id: ""
	I0603 13:53:44.265553 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.265561 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:44.265568 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:44.265646 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:44.304835 1143678 cri.go:89] found id: ""
	I0603 13:53:44.304866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.304874 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:44.304880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:44.304935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:44.345832 1143678 cri.go:89] found id: ""
	I0603 13:53:44.345875 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.345885 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:44.345891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:44.345950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:44.386150 1143678 cri.go:89] found id: ""
	I0603 13:53:44.386186 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.386198 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:44.386207 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:44.386268 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:44.423662 1143678 cri.go:89] found id: ""
	I0603 13:53:44.423697 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.423709 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:44.423719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:44.423788 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:44.462437 1143678 cri.go:89] found id: ""
	I0603 13:53:44.462464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.462473 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:44.462481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:44.462567 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:44.501007 1143678 cri.go:89] found id: ""
	I0603 13:53:44.501062 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.501074 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:44.501081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:44.501138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:44.535501 1143678 cri.go:89] found id: ""
	I0603 13:53:44.535543 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.535554 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:44.535567 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:44.535585 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:44.587114 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:44.587157 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:44.602151 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:44.602180 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:44.674065 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:44.674104 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:44.674122 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:44.757443 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:44.757488 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.306481 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:47.319895 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:47.319958 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:43.818030 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.316852 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:45.370762 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.371799 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:49.871512 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.488076 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:48.488472 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.488892 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.356975 1143678 cri.go:89] found id: ""
	I0603 13:53:47.357013 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.357026 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:47.357034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:47.357106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:47.393840 1143678 cri.go:89] found id: ""
	I0603 13:53:47.393869 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.393877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:47.393884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:47.393936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:47.428455 1143678 cri.go:89] found id: ""
	I0603 13:53:47.428493 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.428506 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:47.428514 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:47.428597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:47.463744 1143678 cri.go:89] found id: ""
	I0603 13:53:47.463777 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.463788 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:47.463795 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:47.463855 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:47.498134 1143678 cri.go:89] found id: ""
	I0603 13:53:47.498159 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.498167 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:47.498173 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:47.498245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:47.534153 1143678 cri.go:89] found id: ""
	I0603 13:53:47.534195 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.534206 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:47.534219 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:47.534272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:47.567148 1143678 cri.go:89] found id: ""
	I0603 13:53:47.567179 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.567187 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:47.567194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:47.567249 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:47.605759 1143678 cri.go:89] found id: ""
	I0603 13:53:47.605790 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.605798 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:47.605810 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:47.605824 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:47.683651 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:47.683692 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:47.683705 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:47.763810 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:47.763848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.806092 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:47.806131 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:47.859637 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:47.859677 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.377538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:50.391696 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:50.391776 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:50.433968 1143678 cri.go:89] found id: ""
	I0603 13:53:50.434001 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.434013 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:50.434020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:50.434080 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:50.470561 1143678 cri.go:89] found id: ""
	I0603 13:53:50.470589 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.470596 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:50.470603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:50.470662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:50.510699 1143678 cri.go:89] found id: ""
	I0603 13:53:50.510727 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.510735 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:50.510741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:50.510808 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:50.553386 1143678 cri.go:89] found id: ""
	I0603 13:53:50.553433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.553445 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:50.553452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:50.553533 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:50.589731 1143678 cri.go:89] found id: ""
	I0603 13:53:50.589779 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.589792 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:50.589801 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:50.589885 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:50.625144 1143678 cri.go:89] found id: ""
	I0603 13:53:50.625180 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.625192 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:50.625201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:50.625274 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:50.669021 1143678 cri.go:89] found id: ""
	I0603 13:53:50.669053 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.669061 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:50.669067 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:50.669121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:50.714241 1143678 cri.go:89] found id: ""
	I0603 13:53:50.714270 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.714284 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:50.714297 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:50.714314 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:50.766290 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:50.766333 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.797242 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:50.797275 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:50.866589 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:50.866616 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:50.866637 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:50.948808 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:50.948854 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:48.318282 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.817445 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.370798 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.377027 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.490719 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.989907 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:53.496797 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:53.511944 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:53.512021 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:53.549028 1143678 cri.go:89] found id: ""
	I0603 13:53:53.549057 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.549066 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:53.549072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:53.549128 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:53.583533 1143678 cri.go:89] found id: ""
	I0603 13:53:53.583566 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.583578 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:53.583586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:53.583652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:53.618578 1143678 cri.go:89] found id: ""
	I0603 13:53:53.618609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.618618 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:53.618626 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:53.618701 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:53.653313 1143678 cri.go:89] found id: ""
	I0603 13:53:53.653347 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.653358 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:53.653364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:53.653442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:53.689805 1143678 cri.go:89] found id: ""
	I0603 13:53:53.689839 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.689849 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:53.689857 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:53.689931 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:53.725538 1143678 cri.go:89] found id: ""
	I0603 13:53:53.725571 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.725584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:53.725592 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:53.725648 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:53.762284 1143678 cri.go:89] found id: ""
	I0603 13:53:53.762325 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.762336 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:53.762345 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:53.762419 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:53.799056 1143678 cri.go:89] found id: ""
	I0603 13:53:53.799083 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.799092 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:53.799102 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:53.799115 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:53.873743 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:53.873809 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:53.919692 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:53.919724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:53.969068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:53.969109 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.983840 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:53.983866 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:54.054842 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.555587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:56.570014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:56.570076 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:56.604352 1143678 cri.go:89] found id: ""
	I0603 13:53:56.604386 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.604400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:56.604408 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:56.604479 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:56.648126 1143678 cri.go:89] found id: ""
	I0603 13:53:56.648161 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.648171 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:56.648177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:56.648231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:56.685621 1143678 cri.go:89] found id: ""
	I0603 13:53:56.685658 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.685670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:56.685678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:56.685763 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:56.721860 1143678 cri.go:89] found id: ""
	I0603 13:53:56.721891 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.721913 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:56.721921 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:56.721989 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:56.757950 1143678 cri.go:89] found id: ""
	I0603 13:53:56.757982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.757995 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:56.758002 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:56.758068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:56.794963 1143678 cri.go:89] found id: ""
	I0603 13:53:56.794991 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.794999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:56.795007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:56.795072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:56.831795 1143678 cri.go:89] found id: ""
	I0603 13:53:56.831827 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.831839 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:56.831846 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:56.831913 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:56.869263 1143678 cri.go:89] found id: ""
	I0603 13:53:56.869293 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.869303 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:56.869314 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:56.869331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:56.945068 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.945096 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:56.945110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:57.028545 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:57.028582 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:57.069973 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:57.070009 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:57.126395 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:57.126436 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.316616 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:55.316981 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:57.317295 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.870680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.371553 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.990964 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.489616 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.644870 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:59.658547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:59.658634 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:59.694625 1143678 cri.go:89] found id: ""
	I0603 13:53:59.694656 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.694665 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:59.694673 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:59.694740 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:59.730475 1143678 cri.go:89] found id: ""
	I0603 13:53:59.730573 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.730590 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:59.730599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:59.730696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:59.768533 1143678 cri.go:89] found id: ""
	I0603 13:53:59.768567 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.768580 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:59.768590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:59.768662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:59.804913 1143678 cri.go:89] found id: ""
	I0603 13:53:59.804944 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.804953 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:59.804960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:59.805014 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:59.850331 1143678 cri.go:89] found id: ""
	I0603 13:53:59.850363 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.850376 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:59.850385 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:59.850466 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:59.890777 1143678 cri.go:89] found id: ""
	I0603 13:53:59.890814 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.890826 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:59.890834 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:59.890909 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:59.931233 1143678 cri.go:89] found id: ""
	I0603 13:53:59.931268 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.931277 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:59.931283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:59.931354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:59.966267 1143678 cri.go:89] found id: ""
	I0603 13:53:59.966307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.966319 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:59.966333 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:59.966356 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:00.019884 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:00.019924 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:00.034936 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:00.034982 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:00.115002 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:00.115035 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:00.115053 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:00.189992 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:00.190035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:59.818065 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.316183 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.870679 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.872563 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.490213 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.988699 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.737387 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:02.752131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:02.752220 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:02.787863 1143678 cri.go:89] found id: ""
	I0603 13:54:02.787893 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.787902 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:02.787908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:02.787974 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:02.824938 1143678 cri.go:89] found id: ""
	I0603 13:54:02.824973 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.824983 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:02.824989 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:02.825061 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:02.861425 1143678 cri.go:89] found id: ""
	I0603 13:54:02.861461 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.861469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:02.861476 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:02.861546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:02.907417 1143678 cri.go:89] found id: ""
	I0603 13:54:02.907453 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.907475 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:02.907483 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:02.907553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:02.953606 1143678 cri.go:89] found id: ""
	I0603 13:54:02.953640 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.953649 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:02.953655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:02.953728 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:03.007785 1143678 cri.go:89] found id: ""
	I0603 13:54:03.007816 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.007824 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:03.007830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:03.007896 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:03.058278 1143678 cri.go:89] found id: ""
	I0603 13:54:03.058316 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.058329 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:03.058338 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:03.058404 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:03.094766 1143678 cri.go:89] found id: ""
	I0603 13:54:03.094800 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.094811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:03.094824 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:03.094840 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:03.163663 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:03.163690 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:03.163704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:03.250751 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:03.250802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:03.292418 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:03.292466 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:03.344552 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:03.344600 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:05.859965 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:05.875255 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:05.875340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:05.918590 1143678 cri.go:89] found id: ""
	I0603 13:54:05.918619 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.918630 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:05.918637 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:05.918706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:05.953932 1143678 cri.go:89] found id: ""
	I0603 13:54:05.953969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.953980 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:05.953988 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:05.954056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:05.993319 1143678 cri.go:89] found id: ""
	I0603 13:54:05.993348 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.993359 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:05.993368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:05.993468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:06.033047 1143678 cri.go:89] found id: ""
	I0603 13:54:06.033079 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.033087 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:06.033100 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:06.033156 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:06.072607 1143678 cri.go:89] found id: ""
	I0603 13:54:06.072631 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.072640 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:06.072647 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:06.072698 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:06.109944 1143678 cri.go:89] found id: ""
	I0603 13:54:06.109990 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.109999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:06.110007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:06.110071 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:06.150235 1143678 cri.go:89] found id: ""
	I0603 13:54:06.150266 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.150276 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:06.150284 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:06.150349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:06.193963 1143678 cri.go:89] found id: ""
	I0603 13:54:06.193992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.194004 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:06.194017 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:06.194035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:06.235790 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:06.235827 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:06.289940 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:06.289980 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:06.305205 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:06.305240 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:06.381170 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:06.381191 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:06.381206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:04.316812 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.317759 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.370944 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.371668 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:05.989346 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.492021 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.958985 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:08.973364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:08.973462 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:09.015050 1143678 cri.go:89] found id: ""
	I0603 13:54:09.015087 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.015099 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:09.015107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:09.015187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:09.054474 1143678 cri.go:89] found id: ""
	I0603 13:54:09.054508 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.054521 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:09.054533 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:09.054590 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:09.090867 1143678 cri.go:89] found id: ""
	I0603 13:54:09.090905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.090917 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:09.090926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:09.090995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:09.128401 1143678 cri.go:89] found id: ""
	I0603 13:54:09.128433 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.128441 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:09.128447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:09.128511 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:09.162952 1143678 cri.go:89] found id: ""
	I0603 13:54:09.162992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.163005 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:09.163013 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:09.163078 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:09.200375 1143678 cri.go:89] found id: ""
	I0603 13:54:09.200402 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.200410 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:09.200416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:09.200495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:09.244694 1143678 cri.go:89] found id: ""
	I0603 13:54:09.244729 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.244740 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:09.244749 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:09.244818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:09.281633 1143678 cri.go:89] found id: ""
	I0603 13:54:09.281666 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.281675 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:09.281686 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:09.281700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:09.341287 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:09.341331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:09.355379 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:09.355415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:09.435934 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:09.435960 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:09.435979 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:09.518203 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:09.518248 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.061538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:12.076939 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:12.077020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:12.114308 1143678 cri.go:89] found id: ""
	I0603 13:54:12.114344 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.114353 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:12.114359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:12.114427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:12.150336 1143678 cri.go:89] found id: ""
	I0603 13:54:12.150368 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.150383 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:12.150390 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:12.150455 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:12.189881 1143678 cri.go:89] found id: ""
	I0603 13:54:12.189934 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.189946 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:12.189954 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:12.190020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:12.226361 1143678 cri.go:89] found id: ""
	I0603 13:54:12.226396 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.226407 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:12.226415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:12.226488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:12.264216 1143678 cri.go:89] found id: ""
	I0603 13:54:12.264257 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.264265 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:12.264271 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:12.264341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:12.306563 1143678 cri.go:89] found id: ""
	I0603 13:54:12.306600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.306612 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:12.306620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:12.306690 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:12.347043 1143678 cri.go:89] found id: ""
	I0603 13:54:12.347082 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.347094 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:12.347105 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:12.347170 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:08.317824 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.816743 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.816776 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.372079 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.872314 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.990240 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:13.489762 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.383947 1143678 cri.go:89] found id: ""
	I0603 13:54:12.383978 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.383989 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:12.384001 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:12.384018 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:12.464306 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:12.464348 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.505079 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:12.505110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:12.563631 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:12.563666 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:12.578328 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:12.578357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:12.646015 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.147166 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:15.163786 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:15.163865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:15.202249 1143678 cri.go:89] found id: ""
	I0603 13:54:15.202286 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.202296 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:15.202304 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:15.202372 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:15.236305 1143678 cri.go:89] found id: ""
	I0603 13:54:15.236345 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.236359 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:15.236368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:15.236459 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:15.273457 1143678 cri.go:89] found id: ""
	I0603 13:54:15.273493 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.273510 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:15.273521 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:15.273592 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:15.314917 1143678 cri.go:89] found id: ""
	I0603 13:54:15.314951 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.314963 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:15.314984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:15.315055 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:15.353060 1143678 cri.go:89] found id: ""
	I0603 13:54:15.353098 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.353112 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:15.353118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:15.353197 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:15.390412 1143678 cri.go:89] found id: ""
	I0603 13:54:15.390448 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.390460 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:15.390469 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:15.390534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:15.427735 1143678 cri.go:89] found id: ""
	I0603 13:54:15.427771 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.427782 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:15.427789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:15.427854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:15.467134 1143678 cri.go:89] found id: ""
	I0603 13:54:15.467165 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.467175 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:15.467184 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:15.467199 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:15.517924 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:15.517973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:15.531728 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:15.531760 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:15.608397 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.608421 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:15.608444 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:15.688976 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:15.689016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.319250 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:16.817018 1143252 pod_ready.go:81] duration metric: took 4m0.00664589s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:16.817042 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:16.817049 1143252 pod_ready.go:38] duration metric: took 4m6.670583216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:16.817081 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:16.817110 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:16.817158 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:16.871314 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:16.871339 1143252 cri.go:89] found id: ""
	I0603 13:54:16.871350 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:16.871405 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.876249 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:16.876319 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:16.917267 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:16.917298 1143252 cri.go:89] found id: ""
	I0603 13:54:16.917310 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:16.917374 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.923290 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:16.923374 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:16.963598 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:16.963619 1143252 cri.go:89] found id: ""
	I0603 13:54:16.963628 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:16.963689 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.968201 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:16.968277 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:17.008229 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:17.008264 1143252 cri.go:89] found id: ""
	I0603 13:54:17.008274 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:17.008341 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.012719 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:17.012795 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:17.048353 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.048384 1143252 cri.go:89] found id: ""
	I0603 13:54:17.048394 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:17.048459 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.053094 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:17.053162 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:17.088475 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:17.088507 1143252 cri.go:89] found id: ""
	I0603 13:54:17.088518 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:17.088583 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.093293 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:17.093373 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:17.130335 1143252 cri.go:89] found id: ""
	I0603 13:54:17.130370 1143252 logs.go:276] 0 containers: []
	W0603 13:54:17.130381 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:17.130389 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:17.130472 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:17.176283 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:17.176317 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:17.176324 1143252 cri.go:89] found id: ""
	I0603 13:54:17.176335 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:17.176409 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.181455 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.185881 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:17.185902 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:17.239636 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:17.239680 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:17.309488 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:17.309532 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:17.362243 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:17.362282 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:17.401389 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:17.401440 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.442095 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:17.442127 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:17.923198 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:17.923247 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:17.939968 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:17.940000 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:18.075054 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:18.075098 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:18.113954 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:18.113994 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:18.181862 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:18.181906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:18.227105 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:18.227137 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:18.272684 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.272721 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.371753 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:17.870321 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:19.879331 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:15.990326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.489960 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.228279 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:18.242909 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:18.242985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:18.285400 1143678 cri.go:89] found id: ""
	I0603 13:54:18.285445 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.285455 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:18.285461 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:18.285521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:18.321840 1143678 cri.go:89] found id: ""
	I0603 13:54:18.321868 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.321877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:18.321884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:18.321943 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:18.358856 1143678 cri.go:89] found id: ""
	I0603 13:54:18.358888 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.358902 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:18.358911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:18.358979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:18.395638 1143678 cri.go:89] found id: ""
	I0603 13:54:18.395678 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.395691 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:18.395699 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:18.395766 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:18.435541 1143678 cri.go:89] found id: ""
	I0603 13:54:18.435570 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.435581 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:18.435589 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:18.435653 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:18.469491 1143678 cri.go:89] found id: ""
	I0603 13:54:18.469527 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.469538 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:18.469545 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:18.469615 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:18.507986 1143678 cri.go:89] found id: ""
	I0603 13:54:18.508018 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.508030 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:18.508039 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:18.508106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:18.542311 1143678 cri.go:89] found id: ""
	I0603 13:54:18.542343 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.542351 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:18.542361 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:18.542375 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:18.619295 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.619337 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:18.662500 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:18.662540 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:18.714392 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:18.714432 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:18.728750 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:18.728785 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:18.800786 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
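
The repeated "connection to the server localhost:8443 was refused" only means that nothing is answering on the apiserver port of this node yet. A quick, minikube-agnostic way to confirm that from inside the node, sketched with standard tools:

    sudo ss -ltnp | grep 8443                 # is anything listening on the apiserver port?
    curl -k https://localhost:8443/healthz    # -k skips certificate verification; "connection refused" here matches the log
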
	I0603 13:54:21.301554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:21.315880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:21.315944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:21.358178 1143678 cri.go:89] found id: ""
	I0603 13:54:21.358208 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.358217 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:21.358227 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:21.358289 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:21.395873 1143678 cri.go:89] found id: ""
	I0603 13:54:21.395969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.395995 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:21.396014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:21.396111 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:21.431781 1143678 cri.go:89] found id: ""
	I0603 13:54:21.431810 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.431822 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:21.431831 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:21.431906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.472840 1143678 cri.go:89] found id: ""
	I0603 13:54:21.472872 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.472885 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:21.472893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.472955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.512296 1143678 cri.go:89] found id: ""
	I0603 13:54:21.512333 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.512346 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:21.512353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.512421 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.547555 1143678 cri.go:89] found id: ""
	I0603 13:54:21.547588 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.547599 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:21.547609 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.547670 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.584972 1143678 cri.go:89] found id: ""
	I0603 13:54:21.585005 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.585013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.585019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:21.585085 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:21.621566 1143678 cri.go:89] found id: ""
	I0603 13:54:21.621599 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.621610 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:21.621623 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:21.621639 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:21.637223 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:21.637263 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:21.712272 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.712294 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.712310 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.800453 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:21.800490 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:21.841477 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.841525 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:20.819740 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:20.836917 1143252 api_server.go:72] duration metric: took 4m15.913250824s to wait for apiserver process to appear ...
	I0603 13:54:20.836947 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:20.836988 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:20.837038 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:20.874034 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:20.874064 1143252 cri.go:89] found id: ""
	I0603 13:54:20.874076 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:20.874146 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.878935 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:20.879020 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:20.920390 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:20.920417 1143252 cri.go:89] found id: ""
	I0603 13:54:20.920425 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:20.920494 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.924858 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:20.924934 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:20.966049 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:20.966077 1143252 cri.go:89] found id: ""
	I0603 13:54:20.966088 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:20.966174 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.970734 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:20.970812 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.010892 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.010918 1143252 cri.go:89] found id: ""
	I0603 13:54:21.010929 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:21.010994 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.016274 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.016347 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.055294 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.055318 1143252 cri.go:89] found id: ""
	I0603 13:54:21.055327 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:21.055375 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.060007 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.060069 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.099200 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:21.099225 1143252 cri.go:89] found id: ""
	I0603 13:54:21.099236 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:21.099309 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.103590 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.103662 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.140375 1143252 cri.go:89] found id: ""
	I0603 13:54:21.140409 1143252 logs.go:276] 0 containers: []
	W0603 13:54:21.140422 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.140431 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:21.140498 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:21.180709 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.180735 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.180739 1143252 cri.go:89] found id: ""
	I0603 13:54:21.180747 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:21.180814 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.184952 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.189111 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.189140 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.663768 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:21.663807 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:21.719542 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:21.719573 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:21.786686 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:21.786725 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:21.824908 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:21.824948 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.864778 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:21.864818 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.904450 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:21.904480 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.942006 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:21.942040 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.979636 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.979673 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:22.033943 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:22.033980 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:22.048545 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:22.048578 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:22.154866 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:22.154906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:22.218033 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:22.218073 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:22.374700 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.871898 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:20.989874 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:23.489083 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
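
The interleaved pod_ready lines come from what appear to be three separate profiles, each polling a metrics-server pod that never reports Ready. An equivalent one-shot check with kubectl, assuming the addon's usual k8s-app=metrics-server label and a placeholder context name:

    kubectl --context <profile> -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=60s
    kubectl --context <profile> -n kube-system get pod -l k8s-app=metrics-server -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
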
	I0603 13:54:24.394864 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:24.408416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.408527 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.444572 1143678 cri.go:89] found id: ""
	I0603 13:54:24.444603 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.444612 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:24.444618 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.444672 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.483710 1143678 cri.go:89] found id: ""
	I0603 13:54:24.483744 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.483755 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:24.483763 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.483837 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.522396 1143678 cri.go:89] found id: ""
	I0603 13:54:24.522437 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.522450 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:24.522457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.522520 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.560865 1143678 cri.go:89] found id: ""
	I0603 13:54:24.560896 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.560905 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:24.560911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.560964 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:24.598597 1143678 cri.go:89] found id: ""
	I0603 13:54:24.598632 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.598643 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:24.598657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:24.598722 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:24.638854 1143678 cri.go:89] found id: ""
	I0603 13:54:24.638885 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.638897 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:24.638908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:24.638979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:24.678039 1143678 cri.go:89] found id: ""
	I0603 13:54:24.678076 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.678088 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:24.678096 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:24.678166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:24.712836 1143678 cri.go:89] found id: ""
	I0603 13:54:24.712871 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.712883 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:24.712896 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:24.712913 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:24.763503 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:24.763545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:24.779383 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:24.779416 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:24.867254 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:24.867287 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:24.867307 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:24.944920 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:24.944957 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:24.768551 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:54:24.774942 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:54:24.776278 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:24.776301 1143252 api_server.go:131] duration metric: took 3.939347802s to wait for apiserver health ...
	I0603 13:54:24.776310 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:24.776334 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.776386 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.827107 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:24.827139 1143252 cri.go:89] found id: ""
	I0603 13:54:24.827152 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:24.827210 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.831681 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.831752 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.875645 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:24.875689 1143252 cri.go:89] found id: ""
	I0603 13:54:24.875711 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:24.875778 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.880157 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.880256 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.932131 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:24.932157 1143252 cri.go:89] found id: ""
	I0603 13:54:24.932167 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:24.932262 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.938104 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.938168 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.980289 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:24.980318 1143252 cri.go:89] found id: ""
	I0603 13:54:24.980327 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:24.980389 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.985608 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.985687 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:25.033726 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.033749 1143252 cri.go:89] found id: ""
	I0603 13:54:25.033757 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:25.033811 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.038493 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:25.038561 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:25.077447 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.077474 1143252 cri.go:89] found id: ""
	I0603 13:54:25.077485 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:25.077545 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.081701 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:25.081770 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:25.120216 1143252 cri.go:89] found id: ""
	I0603 13:54:25.120246 1143252 logs.go:276] 0 containers: []
	W0603 13:54:25.120254 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:25.120261 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:25.120313 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:25.162562 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.162596 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.162602 1143252 cri.go:89] found id: ""
	I0603 13:54:25.162613 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:25.162678 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.167179 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.171531 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:25.171558 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:25.223749 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:25.223787 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:25.290251 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:25.290293 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:25.315271 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:25.315302 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:25.433219 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:25.433257 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:25.473156 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:25.473194 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:25.513988 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:25.514015 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.587224 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:25.587260 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:25.638872 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:25.638909 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:25.687323 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:25.687372 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.739508 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:25.739539 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.775066 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:25.775096 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.811982 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:25.812016 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:28.685228 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:28.685261 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.685265 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.685269 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.685272 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.685276 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.685279 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.685285 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.685290 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.685298 1143252 system_pods.go:74] duration metric: took 3.908982484s to wait for pod list to return data ...
	I0603 13:54:28.685305 1143252 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:28.687914 1143252 default_sa.go:45] found service account: "default"
	I0603 13:54:28.687939 1143252 default_sa.go:55] duration metric: took 2.627402ms for default service account to be created ...
	I0603 13:54:28.687947 1143252 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:28.693336 1143252 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:28.693369 1143252 system_pods.go:89] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.693375 1143252 system_pods.go:89] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.693379 1143252 system_pods.go:89] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.693385 1143252 system_pods.go:89] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.693389 1143252 system_pods.go:89] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.693393 1143252 system_pods.go:89] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.693401 1143252 system_pods.go:89] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.693418 1143252 system_pods.go:89] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.693438 1143252 system_pods.go:126] duration metric: took 5.484487ms to wait for k8s-apps to be running ...
	I0603 13:54:28.693450 1143252 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:28.693497 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:28.710364 1143252 system_svc.go:56] duration metric: took 16.901982ms WaitForService to wait for kubelet
	I0603 13:54:28.710399 1143252 kubeadm.go:576] duration metric: took 4m23.786738812s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:28.710444 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:28.713300 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:28.713328 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:28.713362 1143252 node_conditions.go:105] duration metric: took 2.909242ms to run NodePressure ...
	I0603 13:54:28.713382 1143252 start.go:240] waiting for startup goroutines ...
	I0603 13:54:28.713392 1143252 start.go:245] waiting for cluster config update ...
	I0603 13:54:28.713424 1143252 start.go:254] writing updated cluster config ...
	I0603 13:54:28.713798 1143252 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:28.767538 1143252 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:28.769737 1143252 out.go:177] * Done! kubectl is now configured to use "embed-certs-223260" cluster and "default" namespace by default
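
At this point the embed-certs-223260 profile has finished starting and everything in kube-system is Running except the metrics-server pod. A minimal manual spot check, using the context name the log itself reports:

    kubectl --context embed-certs-223260 get nodes -o wide
    kubectl --context embed-certs-223260 get pods -n kube-system
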
	I0603 13:54:27.370695 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:29.870214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:25.990136 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:28.489276 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:30.489392 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:27.495908 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:27.509885 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:27.509968 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:27.545591 1143678 cri.go:89] found id: ""
	I0603 13:54:27.545626 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.545635 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:27.545641 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:27.545695 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:27.583699 1143678 cri.go:89] found id: ""
	I0603 13:54:27.583728 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.583740 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:27.583748 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:27.583835 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:27.623227 1143678 cri.go:89] found id: ""
	I0603 13:54:27.623268 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.623277 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:27.623283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:27.623341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:27.663057 1143678 cri.go:89] found id: ""
	I0603 13:54:27.663090 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.663102 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:27.663109 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:27.663187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:27.708448 1143678 cri.go:89] found id: ""
	I0603 13:54:27.708481 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.708489 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:27.708495 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:27.708551 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:27.743629 1143678 cri.go:89] found id: ""
	I0603 13:54:27.743663 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.743674 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:27.743682 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:27.743748 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:27.778094 1143678 cri.go:89] found id: ""
	I0603 13:54:27.778128 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.778137 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:27.778147 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:27.778210 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:27.813137 1143678 cri.go:89] found id: ""
	I0603 13:54:27.813170 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.813180 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:27.813192 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:27.813208 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:27.861100 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:27.861136 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:27.914752 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:27.914794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:27.929479 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:27.929511 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:28.002898 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:28.002926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:28.002942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.581890 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:30.595982 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:30.596068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:30.638804 1143678 cri.go:89] found id: ""
	I0603 13:54:30.638841 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.638853 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:30.638862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:30.638942 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:30.677202 1143678 cri.go:89] found id: ""
	I0603 13:54:30.677242 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.677253 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:30.677262 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:30.677329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:30.717382 1143678 cri.go:89] found id: ""
	I0603 13:54:30.717436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.717446 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:30.717455 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:30.717523 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:30.753691 1143678 cri.go:89] found id: ""
	I0603 13:54:30.753719 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.753728 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:30.753734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:30.753798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:30.790686 1143678 cri.go:89] found id: ""
	I0603 13:54:30.790714 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.790723 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:30.790729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:30.790783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:30.830196 1143678 cri.go:89] found id: ""
	I0603 13:54:30.830224 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.830237 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:30.830245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:30.830299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:30.865952 1143678 cri.go:89] found id: ""
	I0603 13:54:30.865980 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.865992 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:30.866000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:30.866066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:30.901561 1143678 cri.go:89] found id: ""
	I0603 13:54:30.901592 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.901601 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:30.901610 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:30.901627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.979416 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:30.979459 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:31.035024 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:31.035061 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:31.089005 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:31.089046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:31.105176 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:31.105210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:31.172862 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
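[Editor's note] The repeated `No container was found matching "..."` warnings above come from minikube probing each control-plane component by container name while the apiserver on this node is still down, which is also why the `kubectl describe nodes` call against localhost:8443 is refused. A minimal, hypothetical Go sketch of that probe, run locally with os/exec instead of minikube's ssh_runner (assumes crictl is installed and sudo is available):

	// probe.go - hypothetical sketch; minikube issues the same crictl query over SSH.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listByName returns the IDs of CRI containers whose name matches name,
	// mirroring `sudo crictl ps -a --quiet --name=<name>` from the log above.
	func listByName(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, name := range names {
			ids, err := listByName(name)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}

When every probe returns zero IDs, as above, minikube falls back to gathering kubelet/dmesg/CRI-O logs, which is the sequence that follows in the log.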
	I0603 13:54:32.371040 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.870810 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:32.989041 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.989599 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:33.674069 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:33.688423 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:33.688499 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:33.729840 1143678 cri.go:89] found id: ""
	I0603 13:54:33.729876 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.729886 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:33.729893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:33.729945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:33.764984 1143678 cri.go:89] found id: ""
	I0603 13:54:33.765010 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.765018 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:33.765025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:33.765075 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:33.798411 1143678 cri.go:89] found id: ""
	I0603 13:54:33.798446 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.798459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:33.798468 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:33.798547 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:33.831565 1143678 cri.go:89] found id: ""
	I0603 13:54:33.831600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.831611 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:33.831620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:33.831688 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:33.869701 1143678 cri.go:89] found id: ""
	I0603 13:54:33.869727 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.869735 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:33.869741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:33.869802 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:33.906108 1143678 cri.go:89] found id: ""
	I0603 13:54:33.906134 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.906144 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:33.906153 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:33.906218 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:33.946577 1143678 cri.go:89] found id: ""
	I0603 13:54:33.946607 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.946615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:33.946621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:33.946673 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:33.986691 1143678 cri.go:89] found id: ""
	I0603 13:54:33.986724 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.986743 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:33.986757 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:33.986775 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:34.044068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:34.044110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:34.059686 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:34.059724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:34.141490 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:34.141514 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:34.141531 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:34.227890 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:34.227930 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:36.778969 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:36.792527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:36.792612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:36.828044 1143678 cri.go:89] found id: ""
	I0603 13:54:36.828083 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.828096 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:36.828102 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:36.828166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:36.863869 1143678 cri.go:89] found id: ""
	I0603 13:54:36.863905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.863917 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:36.863926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:36.863996 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:36.899610 1143678 cri.go:89] found id: ""
	I0603 13:54:36.899649 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.899661 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:36.899669 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:36.899742 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:36.938627 1143678 cri.go:89] found id: ""
	I0603 13:54:36.938664 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.938675 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:36.938683 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:36.938739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:36.973810 1143678 cri.go:89] found id: ""
	I0603 13:54:36.973842 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.973857 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:36.973863 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:36.973915 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.013759 1143678 cri.go:89] found id: ""
	I0603 13:54:37.013792 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.013805 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:37.013813 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.013881 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.049665 1143678 cri.go:89] found id: ""
	I0603 13:54:37.049697 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.049706 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.049712 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:37.049787 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:37.087405 1143678 cri.go:89] found id: ""
	I0603 13:54:37.087436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.087446 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:37.087457 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:37.087470 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:37.126443 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.126476 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.177976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:37.178015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:37.192821 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:37.192860 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:37.267895 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:37.267926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:37.267945 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:36.871536 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:37.371048 1143450 pod_ready.go:81] duration metric: took 4m0.007102739s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:37.371080 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:37.371092 1143450 pod_ready.go:38] duration metric: took 4m5.236838117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:37.371111 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:37.371145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:37.371202 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:37.428454 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:37.428487 1143450 cri.go:89] found id: ""
	I0603 13:54:37.428498 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:37.428564 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.434473 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:37.434552 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:37.476251 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.476288 1143450 cri.go:89] found id: ""
	I0603 13:54:37.476300 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:37.476368 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.483190 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:37.483280 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:37.528660 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.528693 1143450 cri.go:89] found id: ""
	I0603 13:54:37.528704 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:37.528797 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.533716 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:37.533809 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:37.573995 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.574016 1143450 cri.go:89] found id: ""
	I0603 13:54:37.574025 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:37.574071 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.578385 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:37.578465 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:37.616468 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:37.616511 1143450 cri.go:89] found id: ""
	I0603 13:54:37.616522 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:37.616603 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.621204 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:37.621277 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.661363 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.661390 1143450 cri.go:89] found id: ""
	I0603 13:54:37.661401 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:37.661507 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.665969 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.666055 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.705096 1143450 cri.go:89] found id: ""
	I0603 13:54:37.705128 1143450 logs.go:276] 0 containers: []
	W0603 13:54:37.705136 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.705142 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:37.705210 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:37.746365 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:37.746400 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.746404 1143450 cri.go:89] found id: ""
	I0603 13:54:37.746412 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:37.746470 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.750874 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.755146 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:37.755175 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.811365 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:37.811403 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.849687 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.849729 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.904870 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:37.904909 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.955448 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:37.955497 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.996659 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:37.996687 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:38.047501 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:38.047540 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:38.090932 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:38.090969 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:38.606612 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:38.606672 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:38.652732 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:38.652774 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:38.670570 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:38.670620 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:38.812156 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:38.812208 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:38.862940 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:38.862988 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.491134 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.990379 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.846505 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:39.860426 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:39.860514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:39.896684 1143678 cri.go:89] found id: ""
	I0603 13:54:39.896712 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.896726 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:39.896736 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:39.896801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:39.932437 1143678 cri.go:89] found id: ""
	I0603 13:54:39.932482 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.932494 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:39.932503 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:39.932571 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:39.967850 1143678 cri.go:89] found id: ""
	I0603 13:54:39.967883 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.967891 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:39.967898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:39.967952 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:40.003255 1143678 cri.go:89] found id: ""
	I0603 13:54:40.003284 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.003292 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:40.003298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:40.003351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:40.045865 1143678 cri.go:89] found id: ""
	I0603 13:54:40.045892 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.045904 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:40.045912 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:40.045976 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:40.082469 1143678 cri.go:89] found id: ""
	I0603 13:54:40.082498 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.082507 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:40.082513 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:40.082584 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:40.117181 1143678 cri.go:89] found id: ""
	I0603 13:54:40.117231 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.117242 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:40.117250 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:40.117320 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:40.157776 1143678 cri.go:89] found id: ""
	I0603 13:54:40.157813 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.157822 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:40.157832 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:40.157848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:40.213374 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:40.213437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:40.228298 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:40.228330 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:40.305450 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:40.305485 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:40.305503 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:40.393653 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:40.393704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.405129 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:41.423234 1143450 api_server.go:72] duration metric: took 4m14.998447047s to wait for apiserver process to appear ...
	I0603 13:54:41.423266 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:41.423312 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:41.423374 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:41.463540 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.463562 1143450 cri.go:89] found id: ""
	I0603 13:54:41.463570 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:41.463620 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.468145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:41.468226 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:41.511977 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.512000 1143450 cri.go:89] found id: ""
	I0603 13:54:41.512017 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:41.512081 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.516600 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:41.516674 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:41.554392 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:41.554420 1143450 cri.go:89] found id: ""
	I0603 13:54:41.554443 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:41.554508 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.558983 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:41.559039 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:41.597710 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:41.597737 1143450 cri.go:89] found id: ""
	I0603 13:54:41.597747 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:41.597811 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.602164 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:41.602227 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:41.639422 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:41.639452 1143450 cri.go:89] found id: ""
	I0603 13:54:41.639462 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:41.639532 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.644093 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:41.644171 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:41.682475 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.682506 1143450 cri.go:89] found id: ""
	I0603 13:54:41.682515 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:41.682578 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.687654 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:41.687734 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:41.724804 1143450 cri.go:89] found id: ""
	I0603 13:54:41.724839 1143450 logs.go:276] 0 containers: []
	W0603 13:54:41.724850 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:41.724858 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:41.724928 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:41.764625 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:41.764653 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:41.764659 1143450 cri.go:89] found id: ""
	I0603 13:54:41.764670 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:41.764736 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.769499 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.773782 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:41.773806 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.816486 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:41.816520 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:41.833538 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:41.833569 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.877958 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:41.878004 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.922575 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:41.922612 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.983865 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:41.983900 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:42.032746 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:42.032773 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:42.076129 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:42.076166 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:42.129061 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:42.129099 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:42.248179 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:42.248213 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:42.292179 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:42.292288 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:42.340447 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:42.340493 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:42.381993 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:42.382024 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:42.488926 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:44.990221 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:42.934691 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:42.948505 1143678 kubeadm.go:591] duration metric: took 4m4.45791317s to restartPrimaryControlPlane
	W0603 13:54:42.948592 1143678 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:54:42.948629 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:54:48.316951 1143678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.36829775s)
	I0603 13:54:48.317039 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:48.333630 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:54:48.345772 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:54:48.357359 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:54:48.357386 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:54:48.357477 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:54:48.367844 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:54:48.367917 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:54:48.379349 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:54:48.389684 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:54:48.389760 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:54:48.401562 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.412670 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:54:48.412743 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.424261 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:54:48.434598 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:54:48.434674 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
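[Editor's note] The grep/rm pairs above are a stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so the upcoming `kubeadm init` can regenerate it. A hypothetical Go sketch of the same check (paths and endpoint taken from the log; root privileges assumed, error handling simplified):

	// cleanup.go - hypothetical sketch of the stale kubeconfig cleanup seen above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: remove it so kubeadm init rewrites it.
				_ = os.Remove(f)
				fmt.Printf("removed stale config %s\n", f)
				continue
			}
			fmt.Printf("kept %s\n", f)
		}
	}

In this run the files do not exist at all (the `ls -la` check exited with status 2), so every grep fails and the removals are no-ops before `kubeadm init` runs.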
	I0603 13:54:48.446187 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:54:48.527873 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:54:48.528073 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:54:48.695244 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:54:48.695401 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:54:48.695581 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:54:48.930141 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:54:45.281199 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:54:45.286305 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:54:45.287421 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:45.287444 1143450 api_server.go:131] duration metric: took 3.864171356s to wait for apiserver health ...
	I0603 13:54:45.287455 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:45.287486 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:45.287540 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:45.328984 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.329012 1143450 cri.go:89] found id: ""
	I0603 13:54:45.329022 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:45.329075 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.334601 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:45.334683 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:45.382942 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:45.382967 1143450 cri.go:89] found id: ""
	I0603 13:54:45.382978 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:45.383039 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.387904 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:45.387969 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:45.431948 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.431981 1143450 cri.go:89] found id: ""
	I0603 13:54:45.431992 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:45.432052 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.440993 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:45.441074 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:45.490086 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.490114 1143450 cri.go:89] found id: ""
	I0603 13:54:45.490125 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:45.490194 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.494628 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:45.494688 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:45.532264 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:45.532296 1143450 cri.go:89] found id: ""
	I0603 13:54:45.532307 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:45.532374 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.536914 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:45.536985 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:45.576641 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:45.576663 1143450 cri.go:89] found id: ""
	I0603 13:54:45.576671 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:45.576720 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.580872 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:45.580926 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:45.628834 1143450 cri.go:89] found id: ""
	I0603 13:54:45.628864 1143450 logs.go:276] 0 containers: []
	W0603 13:54:45.628872 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:45.628879 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:45.628931 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:45.671689 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:45.671719 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:45.671727 1143450 cri.go:89] found id: ""
	I0603 13:54:45.671740 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:45.671799 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.677161 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.682179 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:45.682219 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:45.731155 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:45.731192 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:45.846365 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:45.846411 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.907694 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:45.907733 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.952881 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:45.952919 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.998674 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:45.998722 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:46.061902 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:46.061949 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:46.106017 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:46.106056 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:46.473915 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:46.473981 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:46.530212 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:46.530260 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:46.545954 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:46.545996 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:46.595057 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:46.595097 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:46.637835 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:46.637872 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:49.190539 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:49.190572 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.190577 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.190582 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.190586 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.190590 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.190593 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.190602 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.190609 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.190620 1143450 system_pods.go:74] duration metric: took 3.903157143s to wait for pod list to return data ...
	I0603 13:54:49.190633 1143450 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:49.193192 1143450 default_sa.go:45] found service account: "default"
	I0603 13:54:49.193219 1143450 default_sa.go:55] duration metric: took 2.575016ms for default service account to be created ...
	I0603 13:54:49.193229 1143450 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:49.202028 1143450 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:49.202065 1143450 system_pods.go:89] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.202074 1143450 system_pods.go:89] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.202081 1143450 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.202088 1143450 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.202094 1143450 system_pods.go:89] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.202100 1143450 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.202113 1143450 system_pods.go:89] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.202124 1143450 system_pods.go:89] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.202135 1143450 system_pods.go:126] duration metric: took 8.899065ms to wait for k8s-apps to be running ...
	I0603 13:54:49.202152 1143450 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:49.202209 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:49.220199 1143450 system_svc.go:56] duration metric: took 18.025994ms WaitForService to wait for kubelet
	I0603 13:54:49.220242 1143450 kubeadm.go:576] duration metric: took 4m22.79546223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:49.220269 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:49.223327 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:49.223354 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:49.223367 1143450 node_conditions.go:105] duration metric: took 3.093435ms to run NodePressure ...
	I0603 13:54:49.223383 1143450 start.go:240] waiting for startup goroutines ...
	I0603 13:54:49.223393 1143450 start.go:245] waiting for cluster config update ...
	I0603 13:54:49.223408 1143450 start.go:254] writing updated cluster config ...
	I0603 13:54:49.223704 1143450 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:49.277924 1143450 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:49.280442 1143450 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-030870" cluster and "default" namespace by default
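[Editor's note] The "Checking apiserver healthz ... returned 200: ok" lines above are the readiness gate that lets the default-k8s-diff-port cluster finish its start. A minimal, hypothetical Go sketch of that poll against the endpoint shown in the log (TLS verification is skipped here only because the apiserver presents the cluster's own CA; minikube's real client validates it):

	// healthz.go - hypothetical sketch of the apiserver healthz poll from the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.177:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body) // "ok" in the log above
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver never became healthy before the deadline")
	}

Only after this check succeeds does minikube move on to the system-pods, service-account, and NodePressure checks recorded above, ending with the "Done!" line.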
	I0603 13:54:48.932024 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:54:48.932110 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:54:48.932168 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:54:48.932235 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:54:48.932305 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:54:48.932481 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:54:48.932639 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:54:48.933272 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:54:48.933771 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:54:48.934251 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:54:48.934654 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:54:48.934712 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:54:48.934762 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:54:49.063897 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:54:49.266680 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:54:49.364943 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:54:49.628905 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:54:49.645861 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:54:49.645991 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:54:49.646049 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:54:49.795196 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:54:47.490336 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.989543 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.798407 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:54:49.798564 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:54:49.800163 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:54:49.802226 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:54:49.803809 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:54:49.806590 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:54:52.490088 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:54.990092 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:57.488119 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:59.489775 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:01.490194 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:03.989075 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:05.990054 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:08.489226 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:10.989028 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:13.489118 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:15.489176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:17.989008 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:20.489091 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:22.989284 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:24.990020 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.489326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.983679 1142862 pod_ready.go:81] duration metric: took 4m0.001142992s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	E0603 13:55:27.983708 1142862 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 13:55:27.983731 1142862 pod_ready.go:38] duration metric: took 4m12.038904247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:55:27.983760 1142862 kubeadm.go:591] duration metric: took 4m21.273943202s to restartPrimaryControlPlane
	W0603 13:55:27.983831 1142862 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:55:27.983865 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
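The pod_ready lines above poll the metrics-server pod every couple of seconds until the 4m0s WaitExtra deadline expires, and only then does the restart fall back to a full kubeadm reset. Below is a minimal sketch of that kind of Ready-condition wait, assuming client-go; the function name and interval are illustrative, not minikube's actual implementation.

package poll

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // polling interval is an assumption
	}
	return fmt.Errorf("timed out waiting %s for pod %q in %q to be Ready", timeout, name, ns)
}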
	I0603 13:55:29.807867 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:55:29.808474 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:29.808754 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:34.809455 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:34.809722 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:44.810305 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:44.810491 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
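The [kubelet-check] lines show kubeadm repeatedly probing the kubelet's healthz endpoint on localhost:10248 and getting connection refused. A hedged sketch of that probe loop follows; the retry count and interval are assumptions for illustration.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	for i := 0; i < 6; i++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Matches the "connection refused" failures in the log.
			fmt.Println("kubelet not healthy yet:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
			fmt.Println("unexpected status:", resp.Status)
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("giving up: kubelet never became healthy")
}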
	I0603 13:55:59.870853 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.886953189s)
	I0603 13:55:59.870958 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:55:59.889658 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:55:59.901529 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:55:59.914241 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:55:59.914266 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:55:59.914312 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:55:59.924884 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:55:59.924950 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:55:59.935494 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:55:59.946222 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:55:59.946321 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:55:59.956749 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.967027 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:55:59.967110 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.979124 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:55:59.989689 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:55:59.989751 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
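The stale-config cleanup above applies one pattern to each of the four kubeconfig files: grep for the expected control-plane endpoint, and if the file is missing or does not reference it, remove it so kubeadm can regenerate it. A compact sketch of that pattern, illustrative only; the paths and endpoint are copied from the log lines above.

package main

import (
	"fmt"
	"os/exec"
)

// Keep each kubeconfig file only if it already points at the expected
// control-plane endpoint; otherwise remove it so kubeadm regenerates it.
func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}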
	I0603 13:56:00.000616 1142862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:00.230878 1142862 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:04.811725 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:04.811929 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:08.995375 1142862 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 13:56:08.995463 1142862 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:08.995588 1142862 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:08.995724 1142862 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:08.995874 1142862 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:08.995970 1142862 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:08.997810 1142862 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:08.997914 1142862 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:08.998045 1142862 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:08.998154 1142862 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:08.998321 1142862 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:08.998423 1142862 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:08.998506 1142862 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:08.998578 1142862 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:08.998665 1142862 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:08.998764 1142862 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:08.998860 1142862 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:08.998919 1142862 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:08.999011 1142862 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:08.999111 1142862 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:08.999202 1142862 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 13:56:08.999275 1142862 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:08.999354 1142862 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:08.999423 1142862 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:08.999538 1142862 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:08.999692 1142862 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:09.001133 1142862 out.go:204]   - Booting up control plane ...
	I0603 13:56:09.001218 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:09.001293 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:09.001354 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:09.001499 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:09.001584 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:09.001637 1142862 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:09.001768 1142862 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 13:56:09.001881 1142862 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 13:56:09.001941 1142862 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.923053ms
	I0603 13:56:09.002010 1142862 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 13:56:09.002090 1142862 kubeadm.go:309] [api-check] The API server is healthy after 5.502208975s
	I0603 13:56:09.002224 1142862 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 13:56:09.002363 1142862 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 13:56:09.002457 1142862 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 13:56:09.002647 1142862 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-817450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 13:56:09.002713 1142862 kubeadm.go:309] [bootstrap-token] Using token: a7hbk8.xb8is7k6ewa3l3ya
	I0603 13:56:09.004666 1142862 out.go:204]   - Configuring RBAC rules ...
	I0603 13:56:09.004792 1142862 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 13:56:09.004883 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 13:56:09.005026 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 13:56:09.005234 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 13:56:09.005389 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 13:56:09.005531 1142862 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 13:56:09.005651 1142862 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 13:56:09.005709 1142862 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 13:56:09.005779 1142862 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 13:56:09.005787 1142862 kubeadm.go:309] 
	I0603 13:56:09.005869 1142862 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 13:56:09.005885 1142862 kubeadm.go:309] 
	I0603 13:56:09.006014 1142862 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 13:56:09.006034 1142862 kubeadm.go:309] 
	I0603 13:56:09.006076 1142862 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 13:56:09.006136 1142862 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 13:56:09.006197 1142862 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 13:56:09.006203 1142862 kubeadm.go:309] 
	I0603 13:56:09.006263 1142862 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 13:56:09.006273 1142862 kubeadm.go:309] 
	I0603 13:56:09.006330 1142862 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 13:56:09.006338 1142862 kubeadm.go:309] 
	I0603 13:56:09.006393 1142862 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 13:56:09.006476 1142862 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 13:56:09.006542 1142862 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 13:56:09.006548 1142862 kubeadm.go:309] 
	I0603 13:56:09.006629 1142862 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 13:56:09.006746 1142862 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 13:56:09.006758 1142862 kubeadm.go:309] 
	I0603 13:56:09.006850 1142862 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.006987 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 13:56:09.007028 1142862 kubeadm.go:309] 	--control-plane 
	I0603 13:56:09.007037 1142862 kubeadm.go:309] 
	I0603 13:56:09.007141 1142862 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 13:56:09.007170 1142862 kubeadm.go:309] 
	I0603 13:56:09.007266 1142862 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.007427 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
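The --discovery-token-ca-cert-hash in the join command above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. As an illustration, it can be recomputed from the CA certificate on the node; the ca.crt filename under the certificateDir shown earlier is an assumption.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from the certificateDir "/var/lib/minikube/certs" in the log.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // should match the hash in the kubeadm join line
}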
	I0603 13:56:09.007451 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:56:09.007464 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:56:09.009292 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:56:09.010750 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:56:09.022810 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
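The bridge CNI step writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the payload itself is not shown in the log. The sketch below writes a generic bridge-plus-portmap conflist to the same path purely to show the file's shape; the JSON, including the subnet, is a placeholder and not minikube's actual file.

package main

import "os"

// A generic bridge CNI conflist; placeholder content, not the file minikube ships.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Same destination path as the scp in the log.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}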
	I0603 13:56:09.052132 1142862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-817450 minikube.k8s.io/updated_at=2024_06_03T13_56_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=no-preload-817450 minikube.k8s.io/primary=true
	I0603 13:56:09.291610 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.296892 1142862 ops.go:34] apiserver oom_adj: -16
	I0603 13:56:09.792736 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.292471 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.792688 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.291782 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.792454 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.292056 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.792150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.292620 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.792024 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.292501 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.791790 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.292128 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.792608 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.292106 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.292276 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.292644 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.792571 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.292064 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.791908 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.292511 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.792137 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.292153 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.791809 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.882178 1142862 kubeadm.go:1107] duration metric: took 12.830108615s to wait for elevateKubeSystemPrivileges
	W0603 13:56:21.882223 1142862 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 13:56:21.882236 1142862 kubeadm.go:393] duration metric: took 5m15.237452092s to StartCluster
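elevateKubeSystemPrivileges creates the minikube-rbac clusterrolebinding and then re-runs `kubectl get sa default` about every 500ms until the default service account exists (12.8s in this run). A rough sketch of that wait, reusing the binary and kubeconfig paths from the log; the retry budget is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.1/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	for i := 0; i < 120; i++ {
		// Same probe the log repeats until the default service account appears.
		if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for the default service account")
}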
	I0603 13:56:21.882260 1142862 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.882368 1142862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:56:21.883986 1142862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.884288 1142862 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:56:21.885915 1142862 out.go:177] * Verifying Kubernetes components...
	I0603 13:56:21.884411 1142862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:56:21.884504 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:56:21.887156 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:56:21.887168 1142862 addons.go:69] Setting storage-provisioner=true in profile "no-preload-817450"
	I0603 13:56:21.887199 1142862 addons.go:69] Setting metrics-server=true in profile "no-preload-817450"
	I0603 13:56:21.887230 1142862 addons.go:234] Setting addon storage-provisioner=true in "no-preload-817450"
	W0603 13:56:21.887245 1142862 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:56:21.887261 1142862 addons.go:234] Setting addon metrics-server=true in "no-preload-817450"
	W0603 13:56:21.887276 1142862 addons.go:243] addon metrics-server should already be in state true
	I0603 13:56:21.887295 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887316 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887156 1142862 addons.go:69] Setting default-storageclass=true in profile "no-preload-817450"
	I0603 13:56:21.887366 1142862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-817450"
	I0603 13:56:21.887709 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887711 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887749 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887752 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887779 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887778 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.906019 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I0603 13:56:21.906319 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I0603 13:56:21.906563 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0603 13:56:21.906601 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.906714 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907043 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907126 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907143 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907269 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907288 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907558 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907578 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907752 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.907891 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908248 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.908269 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.908419 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908487 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.909150 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.909175 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.912898 1142862 addons.go:234] Setting addon default-storageclass=true in "no-preload-817450"
	W0603 13:56:21.912926 1142862 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:56:21.912963 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.913361 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.913413 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.928877 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0603 13:56:21.929336 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0603 13:56:21.929541 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930006 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930064 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0603 13:56:21.930161 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930186 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930580 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930723 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.930798 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930812 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930891 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.931037 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.931052 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.931187 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931369 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931394 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.932113 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.932140 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.933613 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.936068 1142862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:56:21.934518 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.937788 1142862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:21.937821 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:56:21.937844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.939174 1142862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:56:21.940435 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:56:21.940458 1142862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:56:21.940559 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.942628 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.943950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944227 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944257 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944449 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944658 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.944734 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944780 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.944919 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944932 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.945154 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.945309 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.945457 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.951140 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0603 13:56:21.951606 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.952125 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.952152 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.952579 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.952808 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.954505 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.954760 1142862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:21.954781 1142862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:56:21.954801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.958298 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.958816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.958851 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.959086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.959325 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.959515 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.959678 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:22.102359 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:56:22.121380 1142862 node_ready.go:35] waiting up to 6m0s for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135572 1142862 node_ready.go:49] node "no-preload-817450" has status "Ready":"True"
	I0603 13:56:22.135599 1142862 node_ready.go:38] duration metric: took 14.156504ms for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135614 1142862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:22.151036 1142862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:22.283805 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:22.288913 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:56:22.288938 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:56:22.297769 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:22.329187 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:56:22.329221 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:56:22.393569 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:22.393594 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:56:22.435605 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:23.470078 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18622743s)
	I0603 13:56:23.470155 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.172344092s)
	I0603 13:56:23.470171 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470192 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470200 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470216 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470515 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.470553 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470567 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470576 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470586 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470589 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470602 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470613 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470625 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470807 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470823 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.471108 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.471138 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.471180 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492187 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.492226 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.492596 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.492618 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492636 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.892903 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.45716212s)
	I0603 13:56:23.892991 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893006 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893418 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893426 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893442 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893459 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893468 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893790 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893811 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893832 1142862 addons.go:475] Verifying addon metrics-server=true in "no-preload-817450"
	I0603 13:56:23.895990 1142862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:56:23.897968 1142862 addons.go:510] duration metric: took 2.013558036s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
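Enabling the addons comes down to applying their manifests with the in-VM kubectl and kubeconfig, as the Run lines above show. A hedged sketch of that step follows; the file list and paths are copied from the log, while the grouping and error handling are illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := [][]string{
		{"/etc/kubernetes/addons/storage-provisioner.yaml"},
		{"/etc/kubernetes/addons/storageclass.yaml"},
		{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	}
	for _, files := range manifests {
		// sudo KUBECONFIG=... kubectl apply -f <file> [-f <file> ...], as in the log.
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.30.1/kubectl", "apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}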
	I0603 13:56:24.157803 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"False"
	I0603 13:56:24.658730 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.658765 1142862 pod_ready.go:81] duration metric: took 2.507699067s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.658779 1142862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664053 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.664084 1142862 pod_ready.go:81] duration metric: took 5.2962ms for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664096 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668496 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.668521 1142862 pod_ready.go:81] duration metric: took 4.417565ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668533 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673549 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.673568 1142862 pod_ready.go:81] duration metric: took 5.026882ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673577 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678207 1142862 pod_ready.go:92] pod "kube-proxy-t45fn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.678228 1142862 pod_ready.go:81] duration metric: took 4.644345ms for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678239 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056174 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:25.056204 1142862 pod_ready.go:81] duration metric: took 377.957963ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056214 1142862 pod_ready.go:38] duration metric: took 2.920586356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:25.056231 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:56:25.056294 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:56:25.071253 1142862 api_server.go:72] duration metric: took 3.186917827s to wait for apiserver process to appear ...
	I0603 13:56:25.071291 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:56:25.071319 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:56:25.076592 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:56:25.077531 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:56:25.077553 1142862 api_server.go:131] duration metric: took 6.255263ms to wait for apiserver health ...
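The apiserver health wait issues an HTTPS GET against the node's /healthz endpoint and expects a 200 "ok" body before reading the control-plane version. A minimal sketch of that check follows; skipping TLS verification here is a shortcut for illustration, and a real client would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Same endpoint the api_server lines above probe.
	resp, err := client.Get("https://192.168.72.125:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // the log shows 200: ok
}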
	I0603 13:56:25.077561 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:56:25.258520 1142862 system_pods.go:59] 9 kube-system pods found
	I0603 13:56:25.258552 1142862 system_pods.go:61] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.258557 1142862 system_pods.go:61] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.258560 1142862 system_pods.go:61] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.258565 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.258569 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.258573 1142862 system_pods.go:61] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.258578 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.258585 1142862 system_pods.go:61] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.258591 1142862 system_pods.go:61] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.258603 1142862 system_pods.go:74] duration metric: took 181.034608ms to wait for pod list to return data ...
	I0603 13:56:25.258618 1142862 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:56:25.454775 1142862 default_sa.go:45] found service account: "default"
	I0603 13:56:25.454810 1142862 default_sa.go:55] duration metric: took 196.18004ms for default service account to be created ...
	I0603 13:56:25.454820 1142862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:56:25.658868 1142862 system_pods.go:86] 9 kube-system pods found
	I0603 13:56:25.658908 1142862 system_pods.go:89] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.658919 1142862 system_pods.go:89] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.658926 1142862 system_pods.go:89] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.658932 1142862 system_pods.go:89] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.658938 1142862 system_pods.go:89] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.658944 1142862 system_pods.go:89] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.658950 1142862 system_pods.go:89] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.658959 1142862 system_pods.go:89] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.658970 1142862 system_pods.go:89] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.658983 1142862 system_pods.go:126] duration metric: took 204.156078ms to wait for k8s-apps to be running ...
	I0603 13:56:25.658999 1142862 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:56:25.659058 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:25.674728 1142862 system_svc.go:56] duration metric: took 15.717684ms WaitForService to wait for kubelet
	I0603 13:56:25.674759 1142862 kubeadm.go:576] duration metric: took 3.790431991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:56:25.674777 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:56:25.855640 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:56:25.855671 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:56:25.855684 1142862 node_conditions.go:105] duration metric: took 180.901974ms to run NodePressure ...
	I0603 13:56:25.855696 1142862 start.go:240] waiting for startup goroutines ...
	I0603 13:56:25.855703 1142862 start.go:245] waiting for cluster config update ...
	I0603 13:56:25.855716 1142862 start.go:254] writing updated cluster config ...
	I0603 13:56:25.856020 1142862 ssh_runner.go:195] Run: rm -f paused
	I0603 13:56:25.908747 1142862 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:56:25.911049 1142862 out.go:177] * Done! kubectl is now configured to use "no-preload-817450" cluster and "default" namespace by default
	I0603 13:56:44.813650 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:44.813933 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:44.813964 1143678 kubeadm.go:309] 
	I0603 13:56:44.814039 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:56:44.814075 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:56:44.814115 1143678 kubeadm.go:309] 
	I0603 13:56:44.814197 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:56:44.814246 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:56:44.814369 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:56:44.814378 1143678 kubeadm.go:309] 
	I0603 13:56:44.814496 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:56:44.814540 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:56:44.814573 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:56:44.814580 1143678 kubeadm.go:309] 
	I0603 13:56:44.814685 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:56:44.814785 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:56:44.814798 1143678 kubeadm.go:309] 
	I0603 13:56:44.814896 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:56:44.815001 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:56:44.815106 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:56:44.815208 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:56:44.815220 1143678 kubeadm.go:309] 
	I0603 13:56:44.816032 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:44.816137 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:56:44.816231 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
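When the kubelet never becomes healthy, the kubeadm output above suggests listing the Kubernetes containers that cri-o actually started. The small sketch below runs that exact crictl pipeline from the log and prints whatever it finds; it is a convenience wrapper, not part of the test harness.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command taken verbatim from the kubeadm troubleshooting hint above.
	cmd := `crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause`
	out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("crictl listing failed:", err)
	}
}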
	W0603 13:56:44.816405 1143678 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0603 13:56:44.816480 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:56:45.288649 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:45.305284 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:56:45.316705 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:56:45.316736 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:56:45.316804 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:56:45.327560 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:56:45.327630 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:56:45.337910 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:56:45.349864 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:56:45.349948 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:56:45.361369 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.371797 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:56:45.371866 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.382861 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:56:45.393310 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:56:45.393382 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:56:45.403822 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:45.476725 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:56:45.476794 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:45.630786 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:45.630956 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:45.631125 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:45.814370 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:45.816372 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:45.816481 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:45.816556 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:45.816710 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:45.816831 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:45.816928 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:45.817003 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:45.817093 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:45.817178 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:45.817328 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:45.817477 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:45.817533 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:45.817607 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:46.025905 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:46.331809 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:46.551488 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:46.636938 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:46.663292 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:46.663400 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:46.663448 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:46.840318 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:46.842399 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:56:46.842530 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:46.851940 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:46.855283 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:46.855443 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:46.857883 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:57:26.860915 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:57:26.861047 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:26.861296 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:31.861724 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:31.862046 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:41.862803 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:41.863057 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:01.862907 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:01.863136 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862069 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:41.862391 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862430 1143678 kubeadm.go:309] 
	I0603 13:58:41.862535 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:58:41.862613 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:58:41.862624 1143678 kubeadm.go:309] 
	I0603 13:58:41.862675 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:58:41.862737 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:58:41.862895 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:58:41.862909 1143678 kubeadm.go:309] 
	I0603 13:58:41.863030 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:58:41.863060 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:58:41.863090 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:58:41.863100 1143678 kubeadm.go:309] 
	I0603 13:58:41.863230 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:58:41.863388 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:58:41.863406 1143678 kubeadm.go:309] 
	I0603 13:58:41.863583 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:58:41.863709 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:58:41.863811 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:58:41.863894 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:58:41.863917 1143678 kubeadm.go:309] 
	I0603 13:58:41.865001 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:58:41.865120 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:58:41.865209 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 13:58:41.865361 1143678 kubeadm.go:393] duration metric: took 8m3.432874561s to StartCluster
	I0603 13:58:41.865460 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:58:41.865537 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:58:41.912780 1143678 cri.go:89] found id: ""
	I0603 13:58:41.912812 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.912826 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:58:41.912832 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:58:41.912901 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:58:41.951372 1143678 cri.go:89] found id: ""
	I0603 13:58:41.951402 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.951411 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:58:41.951418 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:58:41.951490 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:58:41.989070 1143678 cri.go:89] found id: ""
	I0603 13:58:41.989104 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.989115 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:58:41.989123 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:58:41.989191 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:58:42.026208 1143678 cri.go:89] found id: ""
	I0603 13:58:42.026238 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.026246 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:58:42.026252 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:58:42.026312 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:58:42.064899 1143678 cri.go:89] found id: ""
	I0603 13:58:42.064941 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.064950 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:58:42.064971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:58:42.065043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:58:42.098817 1143678 cri.go:89] found id: ""
	I0603 13:58:42.098858 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.098868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:58:42.098876 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:58:42.098939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:58:42.133520 1143678 cri.go:89] found id: ""
	I0603 13:58:42.133558 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.133570 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:58:42.133579 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:58:42.133639 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:58:42.187356 1143678 cri.go:89] found id: ""
	I0603 13:58:42.187387 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.187399 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:58:42.187412 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:58:42.187434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:58:42.249992 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:58:42.250034 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:58:42.272762 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:58:42.272801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:58:42.362004 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:58:42.362030 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:58:42.362046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:58:42.468630 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:58:42.468676 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0603 13:58:42.510945 1143678 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 13:58:42.511002 1143678 out.go:239] * 
	W0603 13:58:42.511094 1143678 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.511119 1143678 out.go:239] * 
	W0603 13:58:42.512307 1143678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:58:42.516199 1143678 out.go:177] 
	W0603 13:58:42.517774 1143678 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.517848 1143678 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 13:58:42.517883 1143678 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 13:58:42.519747 1143678 out.go:177] 
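	(For convenience, a minimal troubleshooting sketch assembled only from the commands the kubeadm output and the minikube suggestion above already print; CONTAINERID is a placeholder as in the original message, and the final line would need the same profile/flags as the failed run.)

		# Check whether the kubelet service is running and inspect its recent logs
		systemctl status kubelet
		journalctl -xeu kubelet

		# List control-plane containers via CRI-O, then read the logs of a failing one
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

		# Per the suggestion above, retry with an explicit cgroup driver
		minikube start --extra-config=kubelet.cgroup-driver=systemd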
	
	
	==> CRI-O <==
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.270383278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e411775-34bb-42d3-bfb8-ee1c25642286 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.272315604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1069c9f-cbb4-423e-9396-1f2157b3c24f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.272683200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423528272660581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1069c9f-cbb4-423e-9396-1f2157b3c24f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.273598122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1653740-ddd0-45e5-92fe-a49c14099fb4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.273667193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1653740-ddd0-45e5-92fe-a49c14099fb4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.273919748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e396c54cedb357a57ac99faf2643088bd5f4ac32e3d087d6b89793d9ec4eeb08,PodSandboxId:3b302ef8b6487e80d6530924059a703e7464f8e33800431474b978710a47ed55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422983947690813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22655fc-5571-496e-a93f-3970d1693435,},Annotations:map[string]string{io.kubernetes.container.hash: 15a9a52d,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27c4962bb8982cea91927cd93e77d3643624f2c8bdcea1c57eb199d8f543711,PodSandboxId:97cb74082cec07a03da7f17124d58169d0d5ce6aae5edd2354bac9172ac01fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983345182653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f8pbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201e687b-1c1b-4030-8b59-b0257a0f876c,},Annotations:map[string]string{io.kubernetes.container.hash: b7f28944,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84874b63b7fd20e3d3decf270af77e030db0b85759bd926f234f6279866448ef,PodSandboxId:5241ddb542f5c62f506f5b18a1941134c2de81ac18f5ae8821b307f4f5883d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983362215651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jgk4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75
956644-426d-49a7-b80c-492c4284f438,},Annotations:map[string]string{io.kubernetes.container.hash: 351ca9b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:526214f62ac9877b8679bd74972c9e5a7fff1255392e765ade04863755dcd249,PodSandboxId:4abcb5707b6281b704951dd006b0fc5235c4a67bdea8cb5d9884e7caf44fe1a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717422982692339386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t45fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578c151-2b36-4125-83f8-f4fbd62a1dc4,},Annotations:map[string]string{io.kubernetes.container.hash: 112a4db8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3712516987b541315caa42ec191de1f8490381a8905601bc18670783e5b465a5,PodSandboxId:70a59eaf41f06bbd52f930cfcd9094a16ee18caa9e889af1077f5514e16c8b59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422962836173073,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb44b3b772d07e11b206e5b0f01ae231,},Annotations:map[string]string{io.kubernetes.container.hash: 866818ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ce44663e901258ef10d9f88a3641193e8adb038271772c2d6c42f3265d96a2,PodSandboxId:7706a8490750828708105889445e13e10bec6f79ba757e00264a76ca5a45746d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422962784502292,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8375333af2ee4d43244d7eb8597636ed,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2ab9144d51792c0c66d2253bdb0cd54218b45f89025eb3186e6d8a4cf17b13,PodSandboxId:32b672762a022aeb960a5dc333847256cd576a93185aa0f3fb8194f4ceec2629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422962753743904,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bd39c3f2658eb439cf6bf643a144708940ab9ca35268e56fb38dd0f2b2d0ba,PodSandboxId:2d0df016f48d0b036573ad32fbe476386c90d4c761148c4fccbbb88836f4a372,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422962660373747,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e13d65dd08a92b3dadcddfd215dae3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ddf7aff7fc5db99feefd13d7ac8694eade514d615f96858e0a149d53aeda96,PodSandboxId:22af9160ec5ca93dfc01af0b91c1583b0172a9efef2b983b6648117cb92bea4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717422669211181789,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1653740-ddd0-45e5-92fe-a49c14099fb4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.307509737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7648bdc7-1488-4aad-ac64-8abbcfc55ce9 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.307610134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7648bdc7-1488-4aad-ac64-8abbcfc55ce9 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.308491415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2b3e8fc-e98f-40f8-af76-611e28071a38 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.309038059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423528308861690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2b3e8fc-e98f-40f8-af76-611e28071a38 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.309510511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d6dbe67-edbe-473a-827e-54520ba47e10 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.309605900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d6dbe67-edbe-473a-827e-54520ba47e10 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.309806971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e396c54cedb357a57ac99faf2643088bd5f4ac32e3d087d6b89793d9ec4eeb08,PodSandboxId:3b302ef8b6487e80d6530924059a703e7464f8e33800431474b978710a47ed55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422983947690813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22655fc-5571-496e-a93f-3970d1693435,},Annotations:map[string]string{io.kubernetes.container.hash: 15a9a52d,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27c4962bb8982cea91927cd93e77d3643624f2c8bdcea1c57eb199d8f543711,PodSandboxId:97cb74082cec07a03da7f17124d58169d0d5ce6aae5edd2354bac9172ac01fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983345182653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f8pbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201e687b-1c1b-4030-8b59-b0257a0f876c,},Annotations:map[string]string{io.kubernetes.container.hash: b7f28944,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84874b63b7fd20e3d3decf270af77e030db0b85759bd926f234f6279866448ef,PodSandboxId:5241ddb542f5c62f506f5b18a1941134c2de81ac18f5ae8821b307f4f5883d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983362215651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jgk4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75
956644-426d-49a7-b80c-492c4284f438,},Annotations:map[string]string{io.kubernetes.container.hash: 351ca9b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:526214f62ac9877b8679bd74972c9e5a7fff1255392e765ade04863755dcd249,PodSandboxId:4abcb5707b6281b704951dd006b0fc5235c4a67bdea8cb5d9884e7caf44fe1a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717422982692339386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t45fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578c151-2b36-4125-83f8-f4fbd62a1dc4,},Annotations:map[string]string{io.kubernetes.container.hash: 112a4db8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3712516987b541315caa42ec191de1f8490381a8905601bc18670783e5b465a5,PodSandboxId:70a59eaf41f06bbd52f930cfcd9094a16ee18caa9e889af1077f5514e16c8b59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422962836173073,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb44b3b772d07e11b206e5b0f01ae231,},Annotations:map[string]string{io.kubernetes.container.hash: 866818ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ce44663e901258ef10d9f88a3641193e8adb038271772c2d6c42f3265d96a2,PodSandboxId:7706a8490750828708105889445e13e10bec6f79ba757e00264a76ca5a45746d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422962784502292,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8375333af2ee4d43244d7eb8597636ed,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2ab9144d51792c0c66d2253bdb0cd54218b45f89025eb3186e6d8a4cf17b13,PodSandboxId:32b672762a022aeb960a5dc333847256cd576a93185aa0f3fb8194f4ceec2629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422962753743904,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bd39c3f2658eb439cf6bf643a144708940ab9ca35268e56fb38dd0f2b2d0ba,PodSandboxId:2d0df016f48d0b036573ad32fbe476386c90d4c761148c4fccbbb88836f4a372,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422962660373747,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e13d65dd08a92b3dadcddfd215dae3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ddf7aff7fc5db99feefd13d7ac8694eade514d615f96858e0a149d53aeda96,PodSandboxId:22af9160ec5ca93dfc01af0b91c1583b0172a9efef2b983b6648117cb92bea4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717422669211181789,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d6dbe67-edbe-473a-827e-54520ba47e10 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.326076466Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:fake.domain/registry.k8s.io/echoserver:1.4,Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:56:23.696561796Z,kubernetes.io/config.source: api,},UserSpecifiedImage:,RuntimeHandler:,},Verbose:false,}" file="otel-collector/interceptors.go:62" id=dcd1cd96-fa7d-4e3f-b3e6-5dc5cb1210eb name=/runtime.v1.ImageService/ImageStatus
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.326182432Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:27" id=dcd1cd96-fa7d-4e3f-b3e6-5dc5cb1210eb name=/runtime.v1.ImageService/ImageStatus
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.326312328Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.326379944Z" level=debug msg="Can't find fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:97" id=dcd1cd96-fa7d-4e3f-b3e6-5dc5cb1210eb name=/runtime.v1.ImageService/ImageStatus
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.326442985Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:111" id=dcd1cd96-fa7d-4e3f-b3e6-5dc5cb1210eb name=/runtime.v1.ImageService/ImageStatus
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.326483971Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:33" id=dcd1cd96-fa7d-4e3f-b3e6-5dc5cb1210eb name=/runtime.v1.ImageService/ImageStatus
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.326534535Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=dcd1cd96-fa7d-4e3f-b3e6-5dc5cb1210eb name=/runtime.v1.ImageService/ImageStatus
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.326805007Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a876591-f846-41f8-8713-f01fdba77e4c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.327157643Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ba10c4cf4ff54616733a281188f600df9178cb3020e523973dc33f48025ae44b,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-j2lpf,Uid:4f776017-1575-4461-a7c8-656e5a170460,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422984009808228,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-j2lpf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f776017-1575-4461-a7c8-656e5a170460,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:56:23.696561796Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3b302ef8b6487e80d6530924059a703e7464f8e33800431474b978710a47ed55,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f22655fc-5571-496e-a93f-3970d1693435,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422983776804841,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22655fc-5571-496e-a93f-3970d1693435,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-03T13:56:23.460271572Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5241ddb542f5c62f506f5b18a1941134c2de81ac18f5ae8821b307f4f5883d95,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgk4p,Uid:75956644-426d-49a7-b80c-492c4284f438,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422982488820404,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jgk4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75956644-426d-49a7-b80c-492c4284f438,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:56:22.178096158Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:97cb74082cec07a03da7f17124d58169d0d5ce6aae5edd2354bac9172ac01fab,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-f8pbl,Uid:201e687b-1c1b-4030-
8b59-b0257a0f876c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422982446016564,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-f8pbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201e687b-1c1b-4030-8b59-b0257a0f876c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:56:22.132828487Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4abcb5707b6281b704951dd006b0fc5235c4a67bdea8cb5d9884e7caf44fe1a1,Metadata:&PodSandboxMetadata{Name:kube-proxy-t45fn,Uid:0578c151-2b36-4125-83f8-f4fbd62a1dc4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422982333964960,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-t45fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578c151-2b36-4125-83f8-f4fbd62a1dc4,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:56:22.019029464Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7706a8490750828708105889445e13e10bec6f79ba757e00264a76ca5a45746d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-817450,Uid:8375333af2ee4d43244d7eb8597636ed,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422962523271485,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8375333af2ee4d43244d7eb8597636ed,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8375333af2ee4d43244d7eb8597636ed,kubernetes.io/config.seen: 2024-06-03T13:56:02.028969931Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32b672762a022aeb960a5dc333847256cd576a93185aa0f3fb8194f4ceec2629,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no
-preload-817450,Uid:68c0344827ff03b6ad52446b7293abdc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717422962522629752,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.125:8443,kubernetes.io/config.hash: 68c0344827ff03b6ad52446b7293abdc,kubernetes.io/config.seen: 2024-06-03T13:56:02.028967525Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:70a59eaf41f06bbd52f930cfcd9094a16ee18caa9e889af1077f5514e16c8b59,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-817450,Uid:eb44b3b772d07e11b206e5b0f01ae231,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422962509157511,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kub
ernetes.pod.name: etcd-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb44b3b772d07e11b206e5b0f01ae231,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.125:2379,kubernetes.io/config.hash: eb44b3b772d07e11b206e5b0f01ae231,kubernetes.io/config.seen: 2024-06-03T13:56:02.028962106Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2d0df016f48d0b036573ad32fbe476386c90d4c761148c4fccbbb88836f4a372,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-817450,Uid:b7e13d65dd08a92b3dadcddfd215dae3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422962507257203,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e13d65dd08a92b3dadcddfd215dae3,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: b7e13d65dd08a92b3dadcddfd215dae3,kubernetes.io/config.seen: 2024-06-03T13:56:02.028969092Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1a876591-f846-41f8-8713-f01fdba77e4c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.327953137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c24dcaf3-2536-4098-8d13-b114fdec5609 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.328029705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c24dcaf3-2536-4098-8d13-b114fdec5609 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:05:28 no-preload-817450 crio[720]: time="2024-06-03 14:05:28.328274043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e396c54cedb357a57ac99faf2643088bd5f4ac32e3d087d6b89793d9ec4eeb08,PodSandboxId:3b302ef8b6487e80d6530924059a703e7464f8e33800431474b978710a47ed55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422983947690813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22655fc-5571-496e-a93f-3970d1693435,},Annotations:map[string]string{io.kubernetes.container.hash: 15a9a52d,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27c4962bb8982cea91927cd93e77d3643624f2c8bdcea1c57eb199d8f543711,PodSandboxId:97cb74082cec07a03da7f17124d58169d0d5ce6aae5edd2354bac9172ac01fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983345182653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f8pbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201e687b-1c1b-4030-8b59-b0257a0f876c,},Annotations:map[string]string{io.kubernetes.container.hash: b7f28944,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84874b63b7fd20e3d3decf270af77e030db0b85759bd926f234f6279866448ef,PodSandboxId:5241ddb542f5c62f506f5b18a1941134c2de81ac18f5ae8821b307f4f5883d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983362215651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jgk4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75
956644-426d-49a7-b80c-492c4284f438,},Annotations:map[string]string{io.kubernetes.container.hash: 351ca9b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:526214f62ac9877b8679bd74972c9e5a7fff1255392e765ade04863755dcd249,PodSandboxId:4abcb5707b6281b704951dd006b0fc5235c4a67bdea8cb5d9884e7caf44fe1a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717422982692339386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t45fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578c151-2b36-4125-83f8-f4fbd62a1dc4,},Annotations:map[string]string{io.kubernetes.container.hash: 112a4db8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3712516987b541315caa42ec191de1f8490381a8905601bc18670783e5b465a5,PodSandboxId:70a59eaf41f06bbd52f930cfcd9094a16ee18caa9e889af1077f5514e16c8b59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422962836173073,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb44b3b772d07e11b206e5b0f01ae231,},Annotations:map[string]string{io.kubernetes.container.hash: 866818ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ce44663e901258ef10d9f88a3641193e8adb038271772c2d6c42f3265d96a2,PodSandboxId:7706a8490750828708105889445e13e10bec6f79ba757e00264a76ca5a45746d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422962784502292,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8375333af2ee4d43244d7eb8597636ed,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2ab9144d51792c0c66d2253bdb0cd54218b45f89025eb3186e6d8a4cf17b13,PodSandboxId:32b672762a022aeb960a5dc333847256cd576a93185aa0f3fb8194f4ceec2629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422962753743904,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bd39c3f2658eb439cf6bf643a144708940ab9ca35268e56fb38dd0f2b2d0ba,PodSandboxId:2d0df016f48d0b036573ad32fbe476386c90d4c761148c4fccbbb88836f4a372,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422962660373747,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e13d65dd08a92b3dadcddfd215dae3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c24dcaf3-2536-4098-8d13-b114fdec5609 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e396c54cedb35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3b302ef8b6487       storage-provisioner
	84874b63b7fd2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   5241ddb542f5c       coredns-7db6d8ff4d-jgk4p
	c27c4962bb898       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   97cb74082cec0       coredns-7db6d8ff4d-f8pbl
	526214f62ac98       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   4abcb5707b628       kube-proxy-t45fn
	3712516987b54       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   70a59eaf41f06       etcd-no-preload-817450
	84ce44663e901       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   7706a84907508       kube-scheduler-no-preload-817450
	1a2ab9144d517       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   32b672762a022       kube-apiserver-no-preload-817450
	87bd39c3f2658       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   2d0df016f48d0       kube-controller-manager-no-preload-817450
	08ddf7aff7fc5       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   14 minutes ago      Exited              kube-apiserver            1                   22af9160ec5ca       kube-apiserver-no-preload-817450
	
	
	==> coredns [84874b63b7fd20e3d3decf270af77e030db0b85759bd926f234f6279866448ef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c27c4962bb8982cea91927cd93e77d3643624f2c8bdcea1c57eb199d8f543711] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-817450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-817450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=no-preload-817450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_56_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-817450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:05:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 14:01:34 +0000   Mon, 03 Jun 2024 13:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 14:01:34 +0000   Mon, 03 Jun 2024 13:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 14:01:34 +0000   Mon, 03 Jun 2024 13:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 14:01:34 +0000   Mon, 03 Jun 2024 13:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.125
	  Hostname:    no-preload-817450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f556b2d5de8f43ba90b51bc125687665
	  System UUID:                f556b2d5-de8f-43ba-90b5-1bc125687665
	  Boot ID:                    4d33bd4d-32f2-4a4a-abf6-785601422159
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-f8pbl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-jgk4p                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-no-preload-817450                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-817450             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-817450    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-t45fn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-817450             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-j2lpf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m26s)  kubelet          Node no-preload-817450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m26s)  kubelet          Node no-preload-817450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m26s)  kubelet          Node no-preload-817450 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node no-preload-817450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node no-preload-817450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node no-preload-817450 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node no-preload-817450 event: Registered Node no-preload-817450 in Controller
	
	
	==> dmesg <==
	[  +0.043992] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.950479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.516387] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.683132] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.786286] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.061639] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058477] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.189366] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.113044] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.297563] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[Jun 3 13:51] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[  +0.069704] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.137649] systemd-fstab-generator[1345]: Ignoring "noauto" option for root device
	[  +4.111210] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.914404] kauditd_printk_skb: 53 callbacks suppressed
	[  +7.346448] kauditd_printk_skb: 24 callbacks suppressed
	[Jun 3 13:55] kauditd_printk_skb: 3 callbacks suppressed
	[Jun 3 13:56] systemd-fstab-generator[3968]: Ignoring "noauto" option for root device
	[  +6.567762] systemd-fstab-generator[4288]: Ignoring "noauto" option for root device
	[  +0.105887] kauditd_printk_skb: 58 callbacks suppressed
	[ +13.824813] systemd-fstab-generator[4485]: Ignoring "noauto" option for root device
	[  +0.113315] kauditd_printk_skb: 12 callbacks suppressed
	[Jun 3 13:57] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3712516987b541315caa42ec191de1f8490381a8905601bc18670783e5b465a5] <==
	{"level":"info","ts":"2024-06-03T13:56:03.203719Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-03T13:56:03.204141Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2db48a961a30b16c","initial-advertise-peer-urls":["https://192.168.72.125:2380"],"listen-peer-urls":["https://192.168.72.125:2380"],"advertise-client-urls":["https://192.168.72.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-03T13:56:03.204227Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-03T13:56:03.204383Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.125:2380"}
	{"level":"info","ts":"2024-06-03T13:56:03.204409Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.125:2380"}
	{"level":"info","ts":"2024-06-03T13:56:03.214446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c switched to configuration voters=(3293409604803801452)"}
	{"level":"info","ts":"2024-06-03T13:56:03.214698Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1634120b80e66761","local-member-id":"2db48a961a30b16c","added-peer-id":"2db48a961a30b16c","added-peer-peer-urls":["https://192.168.72.125:2380"]}
	{"level":"info","ts":"2024-06-03T13:56:04.132995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-03T13:56:04.133122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-03T13:56:04.133164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c received MsgPreVoteResp from 2db48a961a30b16c at term 1"}
	{"level":"info","ts":"2024-06-03T13:56:04.133193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c became candidate at term 2"}
	{"level":"info","ts":"2024-06-03T13:56:04.133217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c received MsgVoteResp from 2db48a961a30b16c at term 2"}
	{"level":"info","ts":"2024-06-03T13:56:04.133244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c became leader at term 2"}
	{"level":"info","ts":"2024-06-03T13:56:04.133272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2db48a961a30b16c elected leader 2db48a961a30b16c at term 2"}
	{"level":"info","ts":"2024-06-03T13:56:04.138206Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2db48a961a30b16c","local-member-attributes":"{Name:no-preload-817450 ClientURLs:[https://192.168.72.125:2379]}","request-path":"/0/members/2db48a961a30b16c/attributes","cluster-id":"1634120b80e66761","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T13:56:04.139973Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:56:04.140164Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:56:04.144388Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:56:04.153727Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T13:56:04.153765Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T13:56:04.156098Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1634120b80e66761","local-member-id":"2db48a961a30b16c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:56:04.156423Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:56:04.159949Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:56:04.163961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T13:56:04.168137Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.125:2379"}
	
	
	==> kernel <==
	 14:05:28 up 14 min,  0 users,  load average: 0.13, 0.12, 0.10
	Linux no-preload-817450 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [08ddf7aff7fc5db99feefd13d7ac8694eade514d615f96858e0a149d53aeda96] <==
	W0603 13:55:55.815060       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.826085       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.849857       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.858186       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.862070       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.925954       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.125628       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.208409       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.220387       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.255041       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.271410       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.276365       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.282499       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.297186       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.393385       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.782558       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.804714       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.904627       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.944660       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.946990       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.951988       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:57.066575       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:57.080185       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:57.257218       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:57.370225       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [1a2ab9144d51792c0c66d2253bdb0cd54218b45f89025eb3186e6d8a4cf17b13] <==
	I0603 13:59:24.550681       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:01:05.698530       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:01:05.698673       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 14:01:06.699383       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:01:06.699499       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:01:06.699617       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:01:06.699499       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:01:06.699745       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:01:06.700911       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:02:06.700068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:02:06.700156       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:02:06.700168       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:02:06.701385       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:02:06.701440       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:02:06.701446       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:04:06.700756       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:04:06.701208       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:04:06.701245       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:04:06.701913       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:04:06.702006       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:04:06.703221       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [87bd39c3f2658eb439cf6bf643a144708940ab9ca35268e56fb38dd0f2b2d0ba] <==
	I0603 13:59:53.340658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="132.6µs"
	E0603 14:00:21.790832       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:00:22.393835       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:00:51.797334       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:00:52.405721       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:01:21.802576       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:01:22.413577       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:01:51.809032       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:01:52.434408       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:02:21.815230       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:02:22.443475       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 14:02:31.342752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="300.514µs"
	I0603 14:02:46.340653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="79.723µs"
	E0603 14:02:51.822842       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:02:52.452819       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:03:21.830213       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:03:22.462806       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:03:51.836132       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:03:52.479213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:04:21.842470       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:04:22.488355       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:04:51.847650       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:04:52.496405       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:05:21.853073       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:05:22.505124       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [526214f62ac9877b8679bd74972c9e5a7fff1255392e765ade04863755dcd249] <==
	I0603 13:56:23.024212       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:56:23.040278       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.125"]
	I0603 13:56:23.274188       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:56:23.274277       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:56:23.274293       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:56:23.279805       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:56:23.280058       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:56:23.280076       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:56:23.282741       1 config.go:192] "Starting service config controller"
	I0603 13:56:23.282773       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:56:23.282794       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:56:23.282797       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:56:23.283308       1 config.go:319] "Starting node config controller"
	I0603 13:56:23.283314       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:56:23.384416       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:56:23.384458       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 13:56:23.384472       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [84ce44663e901258ef10d9f88a3641193e8adb038271772c2d6c42f3265d96a2] <==
	W0603 13:56:05.749841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 13:56:05.751372       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 13:56:05.749051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 13:56:05.751383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 13:56:06.606062       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 13:56:06.606112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 13:56:06.706050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 13:56:06.706097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 13:56:06.757931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 13:56:06.758023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 13:56:06.776279       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 13:56:06.776373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 13:56:06.848588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 13:56:06.848715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 13:56:06.860770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 13:56:06.860847       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 13:56:06.875185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 13:56:06.875238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 13:56:06.915410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 13:56:06.915588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 13:56:06.918373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 13:56:06.918480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 13:56:06.991037       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 13:56:06.992004       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 13:56:09.638054       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 14:03:08 no-preload-817450 kubelet[4295]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:03:08 no-preload-817450 kubelet[4295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:03:08 no-preload-817450 kubelet[4295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:03:10 no-preload-817450 kubelet[4295]: E0603 14:03:10.324566    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:03:21 no-preload-817450 kubelet[4295]: E0603 14:03:21.323787    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:03:36 no-preload-817450 kubelet[4295]: E0603 14:03:36.324476    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:03:47 no-preload-817450 kubelet[4295]: E0603 14:03:47.325350    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:04:01 no-preload-817450 kubelet[4295]: E0603 14:04:01.324063    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:04:08 no-preload-817450 kubelet[4295]: E0603 14:04:08.364530    4295 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:04:08 no-preload-817450 kubelet[4295]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:04:08 no-preload-817450 kubelet[4295]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:04:08 no-preload-817450 kubelet[4295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:04:08 no-preload-817450 kubelet[4295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:04:13 no-preload-817450 kubelet[4295]: E0603 14:04:13.324403    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:04:27 no-preload-817450 kubelet[4295]: E0603 14:04:27.324683    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:04:38 no-preload-817450 kubelet[4295]: E0603 14:04:38.324586    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:04:49 no-preload-817450 kubelet[4295]: E0603 14:04:49.325025    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:05:02 no-preload-817450 kubelet[4295]: E0603 14:05:02.323863    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:05:08 no-preload-817450 kubelet[4295]: E0603 14:05:08.364353    4295 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:05:08 no-preload-817450 kubelet[4295]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:05:08 no-preload-817450 kubelet[4295]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:05:08 no-preload-817450 kubelet[4295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:05:08 no-preload-817450 kubelet[4295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:05:14 no-preload-817450 kubelet[4295]: E0603 14:05:14.324787    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:05:28 no-preload-817450 kubelet[4295]: E0603 14:05:28.327041    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	
	
	==> storage-provisioner [e396c54cedb357a57ac99faf2643088bd5f4ac32e3d087d6b89793d9ec4eeb08] <==
	I0603 13:56:24.054201       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 13:56:24.069760       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 13:56:24.069936       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 13:56:24.085538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 13:56:24.085705       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-817450_f3e3ca56-c2e6-4354-9398-171f9ff71371!
	I0603 13:56:24.086815       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0c63b3e1-dae4-4baa-9434-99efb0ec2ea8", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-817450_f3e3ca56-c2e6-4354-9398-171f9ff71371 became leader
	I0603 13:56:24.186237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-817450_f3e3ca56-c2e6-4354-9398-171f9ff71371!
	

                                                
                                                
-- /stdout --
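The captured logs above point to a single root cause seen from two sides: the kube-apiserver and kube-controller-manager keep reporting v1beta1.metrics.k8s.io as unavailable (503 responses and stale GroupVersion discovery), while the kubelet shows the metrics-server container stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4. A quick way to confirm that the aggregated API never became healthy, assuming kubectl access to the same context, is to check the APIService and the pod directly (the k8s-app=metrics-server label is an assumption carried over from the upstream metrics-server manifests):

	kubectl --context no-preload-817450 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-817450 -n kube-system get pods -l k8s-app=metrics-server -o wide

An Available=False condition on the APIService together with ImagePullBackOff on the pod would account for both symptoms.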
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-817450 -n no-preload-817450
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-817450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-j2lpf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-817450 describe pod metrics-server-569cc877fc-j2lpf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-817450 describe pod metrics-server-569cc877fc-j2lpf: exit status 1 (65.799373ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-j2lpf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-817450 describe pod metrics-server-569cc877fc-j2lpf: exit status 1
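The NotFound result here is most likely a namespace mismatch rather than proof the pod disappeared: the pod listing above ran with -A, and the kubelet log places metrics-server-569cc877fc-j2lpf in kube-system, but the describe call is issued without -n and therefore looks in the default namespace. A namespaced, label-based describe (again assuming the upstream k8s-app=metrics-server label) would locate the pod even after ReplicaSet churn renames it:

	kubectl --context no-preload-817450 -n kube-system describe pod -l k8s-app=metrics-server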
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
... identical WARNING repeated 4 more times ...
E0603 13:58:50.984189 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
... identical WARNING repeated 21 more times ...
E0603 13:59:13.231480 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
... identical WARNING repeated 16 more times ...
E0603 13:59:30.123803 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
... identical WARNING repeated 14 more times ...
E0603 13:59:45.059203 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
... identical WARNING repeated 12 more times ...
E0603 13:59:58.228483 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
... identical WARNING repeated 13 more times ...
E0603 14:00:11.526000 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
... identical WARNING repeated 2 more times ...
E0603 14:00:15.397389 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
... identical WARNING repeated 20 more times ...
E0603 14:00:36.279642 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:00:48.593289 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:00:53.168391 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:00:54.355544 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:01:34.572629 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:01:38.443587 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:02:17.399544 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:02:27.933579 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:02:45.541287 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:03:22.013345 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:04:13.231485 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:04:30.123925 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:04:58.228382 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:05:11.526066 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:05:15.397043 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
[previous warning repeated 24 more times]
E0603 14:05:54.354566 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
[previous warning repeated 93 more times]
E0603 14:07:27.933307 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
[previous warning repeated 17 more times]
E0603 14:07:45.541923 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 2 (228.411406ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-151788" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
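Note: the warnings above name the exact resource being polled. As a rough manual check (assuming the kubectl context matches the minikube profile name, which is minikube's default), the same lookup can be reproduced once the apiserver is reachable:

	kubectl --context old-k8s-version-151788 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While the apiserver on 192.168.50.65:8443 refuses connections, this returns the same "connection refused" error recorded above.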
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 2 (223.004179ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
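The per-field queries above show the split that matters here: the host reports "Running" while the apiserver reports "Stopped", which is why exit status 2 is treated as "may be ok". As a sketch, the same breakdown can be read in one call with the binary and profile used throughout this run:

	out/minikube-linux-amd64 status -p old-k8s-version-151788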
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-151788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-151788 logs -n 25: (1.720694804s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-021279 sudo cat                              | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo find                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo crio                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-021279                                       | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-069000 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | disable-driver-mounts-069000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:43 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-817450             | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC | 03 Jun 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223260            | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-030870  | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-817450                  | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-151788        | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC | 03 Jun 24 13:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223260                 | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC | 03 Jun 24 13:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-030870       | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:54 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-151788             | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:46:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:46:22.347386 1143678 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:46:22.347655 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347666 1143678 out.go:304] Setting ErrFile to fd 2...
	I0603 13:46:22.347672 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347855 1143678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:46:22.348458 1143678 out.go:298] Setting JSON to false
	I0603 13:46:22.349502 1143678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16129,"bootTime":1717406253,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:46:22.349571 1143678 start.go:139] virtualization: kvm guest
	I0603 13:46:22.351720 1143678 out.go:177] * [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:46:22.353180 1143678 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:46:22.353235 1143678 notify.go:220] Checking for updates...
	I0603 13:46:22.354400 1143678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:46:22.355680 1143678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:46:22.356796 1143678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:46:22.357952 1143678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:46:22.359052 1143678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:46:22.360807 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:46:22.361230 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.361306 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.376241 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0603 13:46:22.376679 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.377267 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.377292 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.377663 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.377897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.379705 1143678 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 13:46:22.380895 1143678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:46:22.381188 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.381222 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.396163 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0603 13:46:22.396669 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.397158 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.397180 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.397509 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.397693 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.433731 1143678 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:46:22.434876 1143678 start.go:297] selected driver: kvm2
	I0603 13:46:22.434897 1143678 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.435028 1143678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:46:22.435716 1143678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.435807 1143678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:46:22.451200 1143678 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:46:22.451663 1143678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:46:22.451755 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:46:22.451773 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:46:22.451832 1143678 start.go:340] cluster config:
	{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.451961 1143678 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.454327 1143678 out.go:177] * Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	I0603 13:46:22.057705 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:22.455453 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:46:22.455492 1143678 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 13:46:22.455501 1143678 cache.go:56] Caching tarball of preloaded images
	I0603 13:46:22.455591 1143678 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:46:22.455604 1143678 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 13:46:22.455685 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:46:22.455860 1143678 start.go:360] acquireMachinesLock for old-k8s-version-151788: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:46:28.137725 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:31.209684 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:37.289692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:40.361614 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:46.441692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:49.513686 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:55.593727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:58.665749 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:04.745752 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:07.817726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:13.897702 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:16.969727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:23.049716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:26.121758 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:32.201765 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:35.273759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:41.353716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:44.425767 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:50.505743 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:53.577777 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:59.657729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:02.729769 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:08.809709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:11.881708 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:17.961759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:21.033726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:27.113698 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:30.185691 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:36.265722 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:39.337764 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:45.417711 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:48.489729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:54.569746 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:57.641701 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:03.721772 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:06.793709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:12.873710 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:15.945728 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:22.025678 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:25.097675 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:28.102218 1143252 start.go:364] duration metric: took 3m44.709006863s to acquireMachinesLock for "embed-certs-223260"
	I0603 13:49:28.102293 1143252 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:28.102302 1143252 fix.go:54] fixHost starting: 
	I0603 13:49:28.102635 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:28.102666 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:28.118384 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0603 13:49:28.119014 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:28.119601 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:49:28.119630 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:28.119930 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:28.120116 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:28.120302 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:49:28.122003 1143252 fix.go:112] recreateIfNeeded on embed-certs-223260: state=Stopped err=<nil>
	I0603 13:49:28.122030 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	W0603 13:49:28.122167 1143252 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:28.123963 1143252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223260" ...
	I0603 13:49:28.125564 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Start
	I0603 13:49:28.125750 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring networks are active...
	I0603 13:49:28.126598 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network default is active
	I0603 13:49:28.126965 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network mk-embed-certs-223260 is active
	I0603 13:49:28.127319 1143252 main.go:141] libmachine: (embed-certs-223260) Getting domain xml...
	I0603 13:49:28.128017 1143252 main.go:141] libmachine: (embed-certs-223260) Creating domain...
	I0603 13:49:28.099474 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:28.099536 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.099883 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:49:28.099915 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.100115 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:49:28.102052 1142862 machine.go:97] duration metric: took 4m37.409499751s to provisionDockerMachine
	I0603 13:49:28.102123 1142862 fix.go:56] duration metric: took 4m37.432963538s for fixHost
	I0603 13:49:28.102135 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 4m37.432994587s
	W0603 13:49:28.102158 1142862 start.go:713] error starting host: provision: host is not running
	W0603 13:49:28.102317 1142862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 13:49:28.102332 1142862 start.go:728] Will try again in 5 seconds ...
	I0603 13:49:29.332986 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting to get IP...
	I0603 13:49:29.333963 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.334430 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.334475 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.334403 1144333 retry.go:31] will retry after 203.681987ms: waiting for machine to come up
	I0603 13:49:29.539995 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.540496 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.540564 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.540457 1144333 retry.go:31] will retry after 368.548292ms: waiting for machine to come up
	I0603 13:49:29.911212 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.911632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.911665 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.911566 1144333 retry.go:31] will retry after 402.690969ms: waiting for machine to come up
	I0603 13:49:30.316480 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.316889 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.316920 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.316852 1144333 retry.go:31] will retry after 500.397867ms: waiting for machine to come up
	I0603 13:49:30.818653 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.819082 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.819107 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.819026 1144333 retry.go:31] will retry after 663.669804ms: waiting for machine to come up
	I0603 13:49:31.483776 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:31.484117 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:31.484144 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:31.484079 1144333 retry.go:31] will retry after 938.433137ms: waiting for machine to come up
	I0603 13:49:32.424128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:32.424609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:32.424640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:32.424548 1144333 retry.go:31] will retry after 919.793328ms: waiting for machine to come up
	I0603 13:49:33.103895 1142862 start.go:360] acquireMachinesLock for no-preload-817450: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:49:33.346091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:33.346549 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:33.346574 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:33.346511 1144333 retry.go:31] will retry after 1.115349726s: waiting for machine to come up
	I0603 13:49:34.463875 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:34.464588 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:34.464616 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:34.464529 1144333 retry.go:31] will retry after 1.153940362s: waiting for machine to come up
	I0603 13:49:35.619844 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:35.620243 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:35.620275 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:35.620176 1144333 retry.go:31] will retry after 1.514504154s: waiting for machine to come up
	I0603 13:49:37.135961 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:37.136409 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:37.136431 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:37.136382 1144333 retry.go:31] will retry after 2.757306897s: waiting for machine to come up
	I0603 13:49:39.895589 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:39.895942 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:39.895970 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:39.895881 1144333 retry.go:31] will retry after 3.019503072s: waiting for machine to come up
	I0603 13:49:42.919177 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:42.919640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:42.919670 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:42.919588 1144333 retry.go:31] will retry after 3.150730989s: waiting for machine to come up
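	(Editor's note: the repeated "will retry after ...: waiting for machine to come up" lines above are minikube polling libvirt for the restarted VM's DHCP lease with a growing delay. Below is a minimal Go sketch of that polling pattern only, under the assumption that lookupIP stands in for the real libvirt lease query; it is illustrative, not minikube's actual code.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP with a growing, jittered delay until it returns an
	// address or the deadline passes. lookupIP is a hypothetical stand-in for the
	// DHCP-lease lookup reported by the DBG lines above.
	func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow roughly like the intervals in the log
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 { // pretend the lease shows up on the fourth poll
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.83.246", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}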
	I0603 13:49:47.494462 1143450 start.go:364] duration metric: took 3m37.207410663s to acquireMachinesLock for "default-k8s-diff-port-030870"
	I0603 13:49:47.494544 1143450 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:47.494557 1143450 fix.go:54] fixHost starting: 
	I0603 13:49:47.494876 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:47.494918 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:47.511570 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44939
	I0603 13:49:47.512072 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:47.512568 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:49:47.512593 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:47.512923 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:47.513117 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:49:47.513276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:49:47.514783 1143450 fix.go:112] recreateIfNeeded on default-k8s-diff-port-030870: state=Stopped err=<nil>
	I0603 13:49:47.514817 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	W0603 13:49:47.514999 1143450 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:47.517441 1143450 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-030870" ...
	I0603 13:49:46.071609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072094 1143252 main.go:141] libmachine: (embed-certs-223260) Found IP for machine: 192.168.83.246
	I0603 13:49:46.072117 1143252 main.go:141] libmachine: (embed-certs-223260) Reserving static IP address...
	I0603 13:49:46.072132 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has current primary IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072552 1143252 main.go:141] libmachine: (embed-certs-223260) Reserved static IP address: 192.168.83.246
	I0603 13:49:46.072585 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.072593 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting for SSH to be available...
	I0603 13:49:46.072632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | skip adding static IP to network mk-embed-certs-223260 - found existing host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"}
	I0603 13:49:46.072655 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Getting to WaitForSSH function...
	I0603 13:49:46.074738 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075059 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.075091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075189 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH client type: external
	I0603 13:49:46.075213 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa (-rw-------)
	I0603 13:49:46.075249 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:49:46.075271 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | About to run SSH command:
	I0603 13:49:46.075283 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | exit 0
	I0603 13:49:46.197971 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | SSH cmd err, output: <nil>: 
	I0603 13:49:46.198498 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetConfigRaw
	I0603 13:49:46.199179 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.201821 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.202277 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202533 1143252 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/config.json ...
	I0603 13:49:46.202727 1143252 machine.go:94] provisionDockerMachine start ...
	I0603 13:49:46.202745 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:46.202964 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.205259 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205636 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.205663 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205773 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.205954 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206100 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206318 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.206538 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.206819 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.206837 1143252 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:49:46.310241 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:49:46.310277 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310583 1143252 buildroot.go:166] provisioning hostname "embed-certs-223260"
	I0603 13:49:46.310616 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310836 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.313692 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314078 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.314116 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314222 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.314446 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314631 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314800 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.314969 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.315166 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.315183 1143252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223260 && echo "embed-certs-223260" | sudo tee /etc/hostname
	I0603 13:49:46.428560 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223260
	
	I0603 13:49:46.428600 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.431381 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.431757 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.431784 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.432021 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.432283 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432477 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432609 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.432785 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.432960 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.432976 1143252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223260/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:49:46.542400 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:46.542446 1143252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:49:46.542536 1143252 buildroot.go:174] setting up certificates
	I0603 13:49:46.542557 1143252 provision.go:84] configureAuth start
	I0603 13:49:46.542576 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.542913 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.545940 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546339 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.546368 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.548715 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549097 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.549127 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549294 1143252 provision.go:143] copyHostCerts
	I0603 13:49:46.549382 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:49:46.549397 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:49:46.549486 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:49:46.549578 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:49:46.549587 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:49:46.549613 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:49:46.549664 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:49:46.549671 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:49:46.549690 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:49:46.549740 1143252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223260 san=[127.0.0.1 192.168.83.246 embed-certs-223260 localhost minikube]
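	(Editor's note: the provision.go line above issues a server certificate whose SANs cover 127.0.0.1, the VM IP, the profile hostname, localhost and minikube, signed by the profile's CA. The Go sketch below shows how a certificate with those SANs can be produced with crypto/x509; it is self-signed for brevity, whereas minikube signs server.pem with ca-key.pem, so treat it as an assumption-laden illustration rather than the real implementation.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key and certificate template carrying the SANs reported above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-223260"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // roughly the 26280h CertExpiration shown in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-223260", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.246")},
		}
		// Self-signed here (template used as its own parent); minikube uses its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // the server.pem payload
	}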
	I0603 13:49:46.807050 1143252 provision.go:177] copyRemoteCerts
	I0603 13:49:46.807111 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:49:46.807140 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.809916 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810303 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.810347 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810513 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.810758 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.810929 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.811168 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:46.892182 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:49:46.916657 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0603 13:49:46.941896 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:49:46.967292 1143252 provision.go:87] duration metric: took 424.714334ms to configureAuth
	I0603 13:49:46.967331 1143252 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:49:46.967539 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:49:46.967626 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.970350 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970668 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.970703 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970870 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.971115 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971314 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971454 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.971625 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.971809 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.971831 1143252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:49:47.264894 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:49:47.264922 1143252 machine.go:97] duration metric: took 1.062182146s to provisionDockerMachine
	I0603 13:49:47.264935 1143252 start.go:293] postStartSetup for "embed-certs-223260" (driver="kvm2")
	I0603 13:49:47.264946 1143252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:49:47.264963 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.265368 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:49:47.265398 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.268412 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268765 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.268796 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.269223 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.269455 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.269625 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.348583 1143252 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:49:47.352828 1143252 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:49:47.352867 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:49:47.352949 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:49:47.353046 1143252 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:49:47.353164 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:49:47.363222 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:47.388132 1143252 start.go:296] duration metric: took 123.177471ms for postStartSetup
	I0603 13:49:47.388202 1143252 fix.go:56] duration metric: took 19.285899119s for fixHost
	I0603 13:49:47.388233 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.390960 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391414 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.391477 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391681 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.391937 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392127 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392266 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.392436 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:47.392670 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:47.392687 1143252 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:49:47.494294 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422587.469729448
	
	I0603 13:49:47.494320 1143252 fix.go:216] guest clock: 1717422587.469729448
	I0603 13:49:47.494328 1143252 fix.go:229] Guest: 2024-06-03 13:49:47.469729448 +0000 UTC Remote: 2024-06-03 13:49:47.388208749 +0000 UTC m=+244.138441135 (delta=81.520699ms)
	I0603 13:49:47.494354 1143252 fix.go:200] guest clock delta is within tolerance: 81.520699ms
	I0603 13:49:47.494361 1143252 start.go:83] releasing machines lock for "embed-certs-223260", held for 19.392103897s
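The guest-clock step above runs date +%s.%N inside the VM and compares the result against the host clock, accepting the start only if the delta is within tolerance. A small self-contained sketch of that comparison follows; the 2s tolerance and the host timestamp are assumptions for illustration, not values taken from minikube's source.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N` into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for illustration

	guest, err := parseGuestClock("1717422587.469729448") // sample value from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1717422587, 388208749) // stand-in host timestamp
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}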
	I0603 13:49:47.494394 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.494686 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:47.497515 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.497930 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.497976 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.498110 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498672 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498859 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498934 1143252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:49:47.498988 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.499062 1143252 ssh_runner.go:195] Run: cat /version.json
	I0603 13:49:47.499082 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.501788 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502075 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502131 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502156 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502291 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502390 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502427 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502550 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502647 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502738 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502806 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502942 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502955 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.503078 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.612706 1143252 ssh_runner.go:195] Run: systemctl --version
	I0603 13:49:47.618922 1143252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:49:47.764749 1143252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:49:47.770936 1143252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:49:47.771023 1143252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:49:47.788401 1143252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:49:47.788427 1143252 start.go:494] detecting cgroup driver to use...
	I0603 13:49:47.788486 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:49:47.805000 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:49:47.822258 1143252 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:49:47.822315 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:49:47.837826 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:49:47.853818 1143252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:49:47.978204 1143252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:49:48.106302 1143252 docker.go:233] disabling docker service ...
	I0603 13:49:48.106366 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:49:48.120974 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:49:48.134911 1143252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:49:48.278103 1143252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:49:48.398238 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:49:48.413207 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:49:48.432211 1143252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:49:48.432281 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.443668 1143252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:49:48.443746 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.454990 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.467119 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.479875 1143252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:49:48.496767 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.508872 1143252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.530972 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.542631 1143252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:49:48.552775 1143252 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:49:48.552836 1143252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:49:48.566528 1143252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:49:48.582917 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:48.716014 1143252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:49:48.860157 1143252 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:49:48.860283 1143252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:49:48.865046 1143252 start.go:562] Will wait 60s for crictl version
	I0603 13:49:48.865121 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:49:48.869520 1143252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:49:48.909721 1143252 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:49:48.909819 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.939080 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.970595 1143252 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
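The CRI-O preparation above is a series of sed edits to /etc/crio/crio.conf.d/02-crio.conf: set the pause image, switch the cgroup manager to cgroupfs, and pin conmon_cgroup. The sketch below shows the same line rewrites done in Go with regexp over an illustrative config excerpt (the excerpt is made up; only the replacement values mirror the log).

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative excerpt of a CRI-O drop-in config, not the real file contents.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Equivalent of deleting the conmon_cgroup line and re-adding it as "pod".
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)

	fmt.Print(conf)
}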
	I0603 13:49:47.518807 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Start
	I0603 13:49:47.518981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring networks are active...
	I0603 13:49:47.519623 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network default is active
	I0603 13:49:47.519926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network mk-default-k8s-diff-port-030870 is active
	I0603 13:49:47.520408 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Getting domain xml...
	I0603 13:49:47.521014 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Creating domain...
	I0603 13:49:48.798483 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting to get IP...
	I0603 13:49:48.799695 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800174 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800305 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:48.800165 1144471 retry.go:31] will retry after 204.161843ms: waiting for machine to come up
	I0603 13:49:49.005669 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006143 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006180 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.006091 1144471 retry.go:31] will retry after 382.751679ms: waiting for machine to come up
	I0603 13:49:49.391162 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391717 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391750 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.391670 1144471 retry.go:31] will retry after 314.248576ms: waiting for machine to come up
	I0603 13:49:49.707349 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707957 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707990 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.707856 1144471 retry.go:31] will retry after 446.461931ms: waiting for machine to come up
	I0603 13:49:50.155616 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156238 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156274 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.156174 1144471 retry.go:31] will retry after 712.186964ms: waiting for machine to come up
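The repeated "waiting for machine to come up" entries are a poll loop around the libvirt DHCP-lease lookup, sleeping a growing delay between attempts. A generic sketch of that retry pattern follows; the delay schedule, jitter, and attempt budget are illustrative choices, not the exact behaviour of retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds, the attempt budget is spent, or the
// deadline passes, sleeping a growing, slightly jittered delay in between.
func retry(attempts int, firstDelay, maxTotal time.Duration, fn func() error) error {
	deadline := time.Now().Add(maxTotal)
	delay := firstDelay
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			break
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the delay between attempts
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retry(10, 200*time.Millisecond, 30*time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err, "after", calls, "calls")
}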
	I0603 13:49:48.971971 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:48.975079 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975439 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:48.975471 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975721 1143252 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0603 13:49:48.980114 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:48.993380 1143252 kubeadm.go:877] updating cluster {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:49:48.993543 1143252 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:49:48.993636 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:49.032289 1143252 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:49:49.032364 1143252 ssh_runner.go:195] Run: which lz4
	I0603 13:49:49.036707 1143252 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:49:49.040973 1143252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:49:49.041000 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:49:50.554295 1143252 crio.go:462] duration metric: took 1.517623353s to copy over tarball
	I0603 13:49:50.554387 1143252 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:49:52.823733 1143252 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269303423s)
	I0603 13:49:52.823785 1143252 crio.go:469] duration metric: took 2.269454274s to extract the tarball
	I0603 13:49:52.823799 1143252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:49:52.862060 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:52.906571 1143252 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:49:52.906602 1143252 cache_images.go:84] Images are preloaded, skipping loading
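The preload decision above lists what the container runtime already has (sudo crictl images --output json) and checks for the expected kube-apiserver tag before falling back to the preload tarball. The sketch below parses such output in Go; it assumes the usual crictl JSON shape with an images array carrying repoTags, and the sample JSON is made up for illustration.

package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the wanted tag.
func hasImage(raw []byte, wanted string) (bool, error) {
	var list crictlImages
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == wanted {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Made-up sample standing in for `crictl images --output json`.
	sample := []byte(`{"images":[{"id":"sha256:abc","repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.30.1")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded kube-apiserver present:", ok) // false -> fall back to the preload tarball
}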
	I0603 13:49:52.906618 1143252 kubeadm.go:928] updating node { 192.168.83.246 8443 v1.30.1 crio true true} ...
	I0603 13:49:52.906774 1143252 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:49:52.906866 1143252 ssh_runner.go:195] Run: crio config
	I0603 13:49:52.954082 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:49:52.954111 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:49:52.954129 1143252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:49:52.954159 1143252 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.246 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223260 NodeName:embed-certs-223260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:49:52.954355 1143252 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223260"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:49:52.954446 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:49:52.964488 1143252 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:49:52.964582 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:49:52.974118 1143252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 13:49:52.990701 1143252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:49:53.007539 1143252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0603 13:49:53.024943 1143252 ssh_runner.go:195] Run: grep 192.168.83.246	control-plane.minikube.internal$ /etc/hosts
	I0603 13:49:53.029097 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:53.041234 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:53.178449 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:49:53.195718 1143252 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260 for IP: 192.168.83.246
	I0603 13:49:53.195750 1143252 certs.go:194] generating shared ca certs ...
	I0603 13:49:53.195769 1143252 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:49:53.195954 1143252 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:49:53.196021 1143252 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:49:53.196035 1143252 certs.go:256] generating profile certs ...
	I0603 13:49:53.196256 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/client.key
	I0603 13:49:53.196341 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key.90d43877
	I0603 13:49:53.196437 1143252 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key
	I0603 13:49:53.196605 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:49:53.196663 1143252 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:49:53.196678 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:49:53.196708 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:49:53.196756 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:49:53.196787 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:49:53.196838 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:53.197895 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:49:53.231612 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:49:53.263516 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:49:50.870317 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870816 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870841 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.870781 1144471 retry.go:31] will retry after 855.15183ms: waiting for machine to come up
	I0603 13:49:51.727393 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727960 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:51.727869 1144471 retry.go:31] will retry after 997.293541ms: waiting for machine to come up
	I0603 13:49:52.726578 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727036 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727073 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:52.726953 1144471 retry.go:31] will retry after 1.4233414s: waiting for machine to come up
	I0603 13:49:54.151594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152072 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152099 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:54.152021 1144471 retry.go:31] will retry after 1.348888248s: waiting for machine to come up
	I0603 13:49:53.303724 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:49:53.334700 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 13:49:53.371594 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:49:53.396381 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:49:53.420985 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:49:53.445334 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:49:53.469632 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:49:53.495720 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:49:53.522416 1143252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:49:53.541593 1143252 ssh_runner.go:195] Run: openssl version
	I0603 13:49:53.547653 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:49:53.558802 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563511 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563579 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.569691 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:49:53.582814 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:49:53.595684 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600613 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.607008 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:49:53.619919 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:49:53.632663 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637604 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.643844 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:49:53.655934 1143252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:49:53.660801 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:49:53.667391 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:49:53.674382 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:49:53.681121 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:49:53.687496 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:49:53.693623 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
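Each openssl x509 -noout -in <cert> -checkend 86400 invocation above only asks whether the certificate remains valid for at least another 24 hours. The same check can be done with Go's crypto/x509, as in the sketch below; the certificate path is a placeholder.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in a PEM file
// stays valid for at least the given duration from now.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("%s: no CERTIFICATE block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) >= d, nil
}

func main() {
	// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("valid for at least 24h:", ok)
}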
	I0603 13:49:53.699764 1143252 kubeadm.go:391] StartCluster: {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:49:53.699871 1143252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:49:53.699928 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.736588 1143252 cri.go:89] found id: ""
	I0603 13:49:53.736662 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:49:53.750620 1143252 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:49:53.750644 1143252 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:49:53.750652 1143252 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:49:53.750716 1143252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:49:53.765026 1143252 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:49:53.766297 1143252 kubeconfig.go:125] found "embed-certs-223260" server: "https://192.168.83.246:8443"
	I0603 13:49:53.768662 1143252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:49:53.779583 1143252 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.246
	I0603 13:49:53.779625 1143252 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:49:53.779639 1143252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:49:53.779695 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.820312 1143252 cri.go:89] found id: ""
	I0603 13:49:53.820398 1143252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:49:53.838446 1143252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:49:53.849623 1143252 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:49:53.849643 1143252 kubeadm.go:156] found existing configuration files:
	
	I0603 13:49:53.849700 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:49:53.859379 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:49:53.859451 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:49:53.869939 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:49:53.880455 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:49:53.880527 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:49:53.890918 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.900841 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:49:53.900894 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.910968 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:49:53.921064 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:49:53.921121 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:49:53.931550 1143252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:49:53.942309 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.078959 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.842079 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.043420 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.111164 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.220384 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:49:55.220475 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:55.721612 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.221513 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.257801 1143252 api_server.go:72] duration metric: took 1.037411844s to wait for apiserver process to appear ...
	I0603 13:49:56.257845 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:49:56.257874 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
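The healthz wait that begins here polls https://192.168.83.246:8443/healthz until it answers 200, treating the 403 and 500 responses seen further down as "not ready yet". A standalone sketch of such a polling loop follows; the timeout, poll interval, and the decision to skip TLS verification (the test apiserver uses a self-signed CA) are illustrative assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The test cluster's apiserver certificate is self-signed.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 before RBAC bootstraps and 500 while post-start hooks run are both retryable.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, firstLine(body))
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func firstLine(b []byte) string {
	for i, c := range b {
		if c == '\n' {
			return string(b[:i])
		}
	}
	return string(b)
}

func main() {
	// Placeholder endpoint mirroring the log's target.
	if err := waitHealthy("https://192.168.83.246:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}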
	I0603 13:49:55.502734 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503282 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503313 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:55.503226 1144471 retry.go:31] will retry after 1.733012887s: waiting for machine to come up
	I0603 13:49:57.238544 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.238975 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.239006 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:57.238917 1144471 retry.go:31] will retry after 2.565512625s: waiting for machine to come up
	I0603 13:49:59.806662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807077 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807105 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:59.807024 1144471 retry.go:31] will retry after 2.759375951s: waiting for machine to come up
	I0603 13:49:59.684015 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.684058 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.684078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.757751 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.757791 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.758846 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.779923 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:49:59.779974 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.258098 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.265061 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.265089 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.758643 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.764364 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.764400 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.257950 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.262846 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.262875 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.758078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.763269 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.763301 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.258641 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.263628 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.263658 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.758205 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.765436 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.765470 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:03.258663 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:03.263141 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:50:03.269787 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:03.269817 1143252 api_server.go:131] duration metric: took 7.011964721s to wait for apiserver health ...
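The loop above is minikube's api_server.go repeatedly probing https://192.168.83.246:8443/healthz; each 500 response enumerates the individual [+]/[-] post-start-hook checks until the endpoint finally returns 200 after about 7 seconds. The Go sketch below reproduces that polling pattern under stated assumptions: the URL and the roughly 500ms retry interval are taken from the log, while the TLS handling (InsecureSkipVerify) is an illustrative shortcut, since the real check authenticates with the cluster CA and client certificates. It is not minikube's actual api_server.go.

// Minimal sketch (not minikube's api_server.go): poll an apiserver /healthz
// endpoint until it returns 200 or a timeout elapses. InsecureSkipVerify is
// an assumption for brevity; the real check presents cluster certificates.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			// 500 responses carry the per-check [+]/[-] breakdown seen in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between attempts
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.246:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}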
	I0603 13:50:03.269827 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:50:03.269833 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:03.271812 1143252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:03.273154 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:03.285329 1143252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
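At this point the kvm2 driver plus crio runtime combination makes minikube fall back to the bridge CNI and copy a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The actual file contents are not reproduced in the log, so the sketch below writes an assumed minimal bridge plus host-local conflist of the same general shape; the bridge name, pod subnet, and field choices are illustrative assumptions, not the file minikube ships.

// Hedged sketch: write a minimal bridge CNI conflist similar in shape to the
// one minikube copies to /etc/cni/net.d/1-k8s.conflist. Field values are
// assumptions; the real 496-byte file is not shown in the log.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// Writing under /etc/cni/net.d requires root, matching the "sudo mkdir -p" step above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}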
	I0603 13:50:03.305480 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:03.317546 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:03.317601 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:03.317614 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:03.317627 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:03.317637 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:03.317645 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:50:03.317658 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:03.317667 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:03.317677 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:50:03.317686 1143252 system_pods.go:74] duration metric: took 12.177585ms to wait for pod list to return data ...
	I0603 13:50:03.317695 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:03.321445 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:03.321479 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:03.321493 1143252 node_conditions.go:105] duration metric: took 3.787651ms to run NodePressure ...
	I0603 13:50:03.321512 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:03.598576 1143252 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604196 1143252 kubeadm.go:733] kubelet initialised
	I0603 13:50:03.604219 1143252 kubeadm.go:734] duration metric: took 5.606021ms waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604236 1143252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:03.611441 1143252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.615911 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615936 1143252 pod_ready.go:81] duration metric: took 4.468017ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.615945 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615955 1143252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.620663 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620683 1143252 pod_ready.go:81] duration metric: took 4.71967ms for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.620691 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620697 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.624894 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624917 1143252 pod_ready.go:81] duration metric: took 4.212227ms for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.624925 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624933 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.708636 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708665 1143252 pod_ready.go:81] duration metric: took 83.72445ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.708675 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708681 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.109391 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109454 1143252 pod_ready.go:81] duration metric: took 400.761651ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.109469 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109478 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.509683 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509712 1143252 pod_ready.go:81] duration metric: took 400.226435ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.509723 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509730 1143252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.909629 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909659 1143252 pod_ready.go:81] duration metric: took 399.917901ms for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.909669 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909679 1143252 pod_ready.go:38] duration metric: took 1.30543039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
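Each pod_ready.go wait above is skipped early because the node embed-certs-223260 still reports Ready: False, so the per-pod condition never gets a chance to become True. The client-go sketch below shows the underlying per-pod check (a Ready condition with status True) under stated assumptions: the kubeconfig path and the coredns pod name are taken from the log, but the code itself is an illustration, not minikube's pod_ready.go helper.

// Hedged sketch using client-go: report whether a kube-system pod has the
// Ready condition set to True, which is what the pod_ready.go waits poll for.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name come from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19011-1078924/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-qdjrv", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %s ready: %v\n", pod.Name, podIsReady(pod))
}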
	I0603 13:50:04.909697 1143252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:04.921682 1143252 ops.go:34] apiserver oom_adj: -16
	I0603 13:50:04.921708 1143252 kubeadm.go:591] duration metric: took 11.171050234s to restartPrimaryControlPlane
	I0603 13:50:04.921717 1143252 kubeadm.go:393] duration metric: took 11.221962831s to StartCluster
	I0603 13:50:04.921737 1143252 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.921807 1143252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:04.923342 1143252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.923628 1143252 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:04.927063 1143252 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:04.923693 1143252 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:04.923865 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:04.928850 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:04.928873 1143252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223260"
	I0603 13:50:04.928872 1143252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223260"
	I0603 13:50:04.928889 1143252 addons.go:69] Setting metrics-server=true in profile "embed-certs-223260"
	I0603 13:50:04.928906 1143252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223260"
	I0603 13:50:04.928923 1143252 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223260"
	I0603 13:50:04.928935 1143252 addons.go:234] Setting addon metrics-server=true in "embed-certs-223260"
	W0603 13:50:04.928938 1143252 addons.go:243] addon storage-provisioner should already be in state true
	W0603 13:50:04.928945 1143252 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.929307 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929346 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929352 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929372 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929597 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929630 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.944948 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0603 13:50:04.945071 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0603 13:50:04.945489 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.945571 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.946137 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946166 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946299 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946319 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946589 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946650 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946798 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.947022 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0603 13:50:04.947210 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.947250 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.947517 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.948043 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.948069 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.948437 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.949064 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.949107 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.950532 1143252 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223260"
	W0603 13:50:04.950558 1143252 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:04.950589 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.950951 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.951008 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.964051 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37589
	I0603 13:50:04.964078 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0603 13:50:04.964513 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.964562 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.965062 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965088 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965128 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965153 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965473 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965532 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965652 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.965740 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.967606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.967739 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.969783 1143252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:04.971193 1143252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:02.567560 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.567988 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.568020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:50:02.567915 1144471 retry.go:31] will retry after 3.955051362s: waiting for machine to come up
	I0603 13:50:04.972568 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:04.972588 1143252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:04.972606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971275 1143252 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:04.972634 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:04.972658 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971495 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0603 13:50:04.973108 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.973575 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.973599 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.973931 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.974623 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.974658 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.976128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976251 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976535 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976559 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976709 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976724 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976768 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976915 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977099 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977156 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977242 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977305 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.977500 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.990810 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0603 13:50:04.991293 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.991844 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.991875 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.992279 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.992499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.994225 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.994456 1143252 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:04.994476 1143252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:04.994490 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.997771 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998210 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.998239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998418 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.998627 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.998811 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.998941 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:05.119962 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:05.140880 1143252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:05.271863 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:05.275815 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:05.275843 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:05.294572 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:05.346520 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:05.346553 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:05.417100 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:05.417141 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:05.496250 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:06.207746 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207781 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.207849 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207873 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208103 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208152 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208161 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208182 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208157 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208197 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208200 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208216 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208208 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208284 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208572 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208590 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208691 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208703 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208724 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.216764 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.216783 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.217095 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.217111 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374254 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374281 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374603 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374623 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374634 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374638 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.374644 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374901 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374916 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374933 1143252 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223260"
	I0603 13:50:06.374948 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.377491 1143252 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
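Addon enablement in this run reduces to copying the manifests under /etc/kubernetes/addons/ onto the guest and applying them with the pinned kubectl binary, as the ssh_runner lines show for metrics-server. A rough Go equivalent of that apply step is sketched below; the binary path, KUBECONFIG value, and manifest filenames are taken from the log, while the surrounding program structure and error handling are illustrative, not minikube's addons code.

// Hedged sketch: apply the metrics-server addon manifests with the pinned
// kubectl binary, mirroring the "kubectl apply -f ..." invocation in the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Printf("applied metrics-server addon:\n%s", out)
}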
	I0603 13:50:08.083130 1143678 start.go:364] duration metric: took 3m45.627229097s to acquireMachinesLock for "old-k8s-version-151788"
	I0603 13:50:08.083256 1143678 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:08.083266 1143678 fix.go:54] fixHost starting: 
	I0603 13:50:08.083762 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:08.083812 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:08.103187 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0603 13:50:08.103693 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:08.104269 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:50:08.104299 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:08.104746 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:08.105115 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:08.105347 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetState
	I0603 13:50:08.107125 1143678 fix.go:112] recreateIfNeeded on old-k8s-version-151788: state=Stopped err=<nil>
	I0603 13:50:08.107173 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	W0603 13:50:08.107340 1143678 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:08.109207 1143678 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-151788" ...
	I0603 13:50:06.378684 1143252 addons.go:510] duration metric: took 1.4549999s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 13:50:07.145643 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:06.526793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527302 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Found IP for machine: 192.168.39.177
	I0603 13:50:06.527341 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has current primary IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527366 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserving static IP address...
	I0603 13:50:06.527822 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserved static IP address: 192.168.39.177
	I0603 13:50:06.527857 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for SSH to be available...
	I0603 13:50:06.527902 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.527956 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | skip adding static IP to network mk-default-k8s-diff-port-030870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"}
	I0603 13:50:06.527973 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Getting to WaitForSSH function...
	I0603 13:50:06.530287 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.530696 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530802 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH client type: external
	I0603 13:50:06.530827 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa (-rw-------)
	I0603 13:50:06.530849 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:06.530866 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | About to run SSH command:
	I0603 13:50:06.530877 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | exit 0
	I0603 13:50:06.653910 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | SSH cmd err, output: <nil>: 
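The WaitForSSH exchange above drives the external /usr/bin/ssh binary with the options listed in the DBG line, probing the guest with "exit 0" until the command succeeds. A rough Go equivalent using os/exec is sketched below; the flag subset, user, key path, and guest IP are copied from the log, while the ten-attempt retry loop is an assumption rather than libmachine's exact policy.

// Hedged sketch: probe a guest over the external ssh binary with the options
// the libmachine DBG line shows for WaitForSSH. Retry policy is an assumption.
package main

import (
	"log"
	"os/exec"
	"time"
)

func sshExitZero(host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0", // the probe command seen in the log
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	var err error
	for attempt := 0; attempt < 10; attempt++ {
		if err = sshExitZero("192.168.39.177", "/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa"); err == nil {
			log.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	log.Fatalf("SSH never became available: %v", err)
}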
	I0603 13:50:06.654259 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetConfigRaw
	I0603 13:50:06.654981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:06.658094 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658561 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.658600 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658921 1143450 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/config.json ...
	I0603 13:50:06.659144 1143450 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:06.659168 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:06.659486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.662534 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.662915 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.662959 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.663059 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.663258 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663476 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663660 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.663866 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.664103 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.664115 1143450 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:06.766054 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:06.766083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766406 1143450 buildroot.go:166] provisioning hostname "default-k8s-diff-port-030870"
	I0603 13:50:06.766440 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.769445 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.769820 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.769871 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.770029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.770244 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770423 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770670 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.770893 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.771057 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.771070 1143450 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-030870 && echo "default-k8s-diff-port-030870" | sudo tee /etc/hostname
	I0603 13:50:06.889997 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-030870
	
	I0603 13:50:06.890029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.893778 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894260 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.894297 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894614 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.894826 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895211 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.895423 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.895608 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.895625 1143450 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-030870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-030870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-030870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:07.007930 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
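
The script above edits /etc/hosts only when the machine's hostname is missing: it rewrites an existing 127.0.1.1 entry if one is present and appends a new one otherwise. A minimal Go sketch of the same idea follows; it is not minikube's implementation, and the file path and sample contents are invented so it never touches a real hosts file.

// Illustrative sketch, not minikube's code: ensure a hostname resolves locally
// by rewriting or appending the 127.0.1.1 entry in an /etc/hosts-style file.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		// Roughly what the logged grep test for an existing entry does.
		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
			return nil // already present, nothing to do
		}
	}
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
	replaced := false
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Work on a throwaway file so the sketch never touches the real /etc/hosts.
	path := "/tmp/hosts-sketch"
	sample := "127.0.0.1 localhost\n127.0.1.1 oldname\n"
	if err := os.WriteFile(path, []byte(sample), 0644); err != nil {
		fmt.Println(err)
		return
	}
	if err := ensureHostname(path, "default-k8s-diff-port-030870"); err != nil {
		fmt.Println("update failed:", err)
		return
	}
	updated, _ := os.ReadFile(path)
	fmt.Print(string(updated))
}
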
	I0603 13:50:07.007971 1143450 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:07.008009 1143450 buildroot.go:174] setting up certificates
	I0603 13:50:07.008020 1143450 provision.go:84] configureAuth start
	I0603 13:50:07.008034 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:07.008433 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:07.011208 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011607 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.011640 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011774 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.013986 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014431 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.014462 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014655 1143450 provision.go:143] copyHostCerts
	I0603 13:50:07.014726 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:07.014737 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:07.014787 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:07.014874 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:07.014882 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:07.014902 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:07.014952 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:07.014959 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:07.014974 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:07.015020 1143450 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-030870 san=[127.0.0.1 192.168.39.177 default-k8s-diff-port-030870 localhost minikube]
	I0603 13:50:07.402535 1143450 provision.go:177] copyRemoteCerts
	I0603 13:50:07.402595 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:07.402626 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.405891 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406240 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.406272 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406484 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.406718 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.406943 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.407132 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.489480 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:07.517212 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 13:50:07.543510 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:07.570284 1143450 provision.go:87] duration metric: took 562.244781ms to configureAuth
	I0603 13:50:07.570318 1143450 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:07.570537 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:07.570629 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.574171 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574706 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.574739 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574948 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.575262 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575549 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575781 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.575965 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.576217 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.576247 1143450 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:07.839415 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:07.839455 1143450 machine.go:97] duration metric: took 1.180296439s to provisionDockerMachine
	I0603 13:50:07.839468 1143450 start.go:293] postStartSetup for "default-k8s-diff-port-030870" (driver="kvm2")
	I0603 13:50:07.839482 1143450 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:07.839506 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:07.839843 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:07.839872 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.842547 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.842884 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.842918 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.843234 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.843471 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.843708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.843952 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.927654 1143450 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:07.932965 1143450 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:07.932997 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:07.933082 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:07.933202 1143450 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:07.933343 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:07.945059 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:07.975774 1143450 start.go:296] duration metric: took 136.280559ms for postStartSetup
	I0603 13:50:07.975822 1143450 fix.go:56] duration metric: took 20.481265153s for fixHost
	I0603 13:50:07.975848 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.979035 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979436 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.979486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979737 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.980012 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980228 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980452 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.980691 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.980935 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.980954 1143450 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:08.082946 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422608.057620379
	
	I0603 13:50:08.082978 1143450 fix.go:216] guest clock: 1717422608.057620379
	I0603 13:50:08.082988 1143450 fix.go:229] Guest: 2024-06-03 13:50:08.057620379 +0000 UTC Remote: 2024-06-03 13:50:07.975826846 +0000 UTC m=+237.845886752 (delta=81.793533ms)
	I0603 13:50:08.083018 1143450 fix.go:200] guest clock delta is within tolerance: 81.793533ms
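
The clock check above samples the guest with date +%s.%N (the %!s(MISSING) tokens come from the logger's format handling), compares the sample to the host time, and accepts the machine when the drift stays within a tolerance. A rough Go sketch of that comparison, assuming a one-second tolerance and reusing the sampled value from the log, could look like this.

// Illustrative sketch: parse a guest clock sample in the seconds.nanoseconds
// form produced by date +%s.%N and compare it against the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Sample value taken from the log above; a live check would use fresh output.
	guest, err := parseGuestClock("1717422608.057620379")
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for the sketch
	fmt.Printf("guest clock delta %v (within %v: %v)\n", delta, tolerance, delta <= tolerance)
}
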
	I0603 13:50:08.083025 1143450 start.go:83] releasing machines lock for "default-k8s-diff-port-030870", held for 20.588515063s
	I0603 13:50:08.083060 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.083369 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:08.086674 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087202 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.087285 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087508 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088324 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088575 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088673 1143450 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:08.088758 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.088823 1143450 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:08.088852 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.092020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092175 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092406 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092485 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092863 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092893 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092916 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.092924 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.093273 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093522 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093541 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093708 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.093710 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.176292 1143450 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:08.204977 1143450 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:08.367121 1143450 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:08.376347 1143450 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:08.376431 1143450 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:08.398639 1143450 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:08.398672 1143450 start.go:494] detecting cgroup driver to use...
	I0603 13:50:08.398750 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:08.422776 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:08.443035 1143450 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:08.443108 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:08.459853 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:08.482009 1143450 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:08.631237 1143450 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:08.806623 1143450 docker.go:233] disabling docker service ...
	I0603 13:50:08.806715 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:08.827122 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:08.842457 1143450 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:08.999795 1143450 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:09.148706 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:09.167314 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:09.188867 1143450 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:09.188959 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.202239 1143450 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:09.202319 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.216228 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.231140 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.246767 1143450 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:09.260418 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.274349 1143450 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.300588 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.314659 1143450 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:09.326844 1143450 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:09.326919 1143450 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:09.344375 1143450 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:09.357955 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:09.504105 1143450 ssh_runner.go:195] Run: sudo systemctl restart crio
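
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image to registry.k8s.io/pause:3.9 and switching the cgroup manager to cgroupfs before CRI-O is restarted. A small Go sketch of the same line-oriented rewrite follows; the file layout it assumes is invented, and it works on a throwaway copy rather than the real drop-in.

// Illustrative sketch of the logged sed edits: pin pause_image and switch
// cgroup_manager in a crio drop-in config. The file layout here is assumed.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	var out []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		switch {
		case strings.Contains(line, "pause_image ="):
			line = fmt.Sprintf("pause_image = %q", pauseImage)
		case strings.Contains(line, "cgroup_manager ="):
			line = fmt.Sprintf("cgroup_manager = %q", cgroupManager)
		}
		out = append(out, line)
	}
	if err := sc.Err(); err != nil {
		return err
	}
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
}

func main() {
	// Throwaway copy standing in for /etc/crio/crio.conf.d/02-crio.conf.
	path := "/tmp/02-crio.conf"
	sample := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	if err := os.WriteFile(path, []byte(sample), 0644); err != nil {
		fmt.Println(err)
		return
	}
	if err := rewriteCrioConf(path, "registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println(err)
		return
	}
	updated, _ := os.ReadFile(path)
	fmt.Print(string(updated))
}
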
	I0603 13:50:09.685468 1143450 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:09.685562 1143450 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:09.690863 1143450 start.go:562] Will wait 60s for crictl version
	I0603 13:50:09.690943 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:50:09.696532 1143450 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:09.742785 1143450 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:09.742891 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.782137 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.816251 1143450 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:09.817854 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:09.821049 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821555 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:09.821595 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821855 1143450 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:09.826658 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:09.841351 1143450 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:09.841521 1143450 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:09.841586 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:09.883751 1143450 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:09.883825 1143450 ssh_runner.go:195] Run: which lz4
	I0603 13:50:09.888383 1143450 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 13:50:09.893662 1143450 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:09.893704 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:50:08.110706 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .Start
	I0603 13:50:08.110954 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring networks are active...
	I0603 13:50:08.111890 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network default is active
	I0603 13:50:08.112291 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network mk-old-k8s-version-151788 is active
	I0603 13:50:08.112708 1143678 main.go:141] libmachine: (old-k8s-version-151788) Getting domain xml...
	I0603 13:50:08.113547 1143678 main.go:141] libmachine: (old-k8s-version-151788) Creating domain...
	I0603 13:50:09.528855 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting to get IP...
	I0603 13:50:09.529978 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.530410 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.530453 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.530382 1144654 retry.go:31] will retry after 208.935457ms: waiting for machine to come up
	I0603 13:50:09.741245 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.741816 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.741864 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.741769 1144654 retry.go:31] will retry after 376.532154ms: waiting for machine to come up
	I0603 13:50:10.120533 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.121261 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.121337 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.121239 1144654 retry.go:31] will retry after 339.126643ms: waiting for machine to come up
	I0603 13:50:10.461708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.462488 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.462514 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.462425 1144654 retry.go:31] will retry after 490.057426ms: waiting for machine to come up
	I0603 13:50:10.954107 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.954887 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.954921 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.954840 1144654 retry.go:31] will retry after 711.209001ms: waiting for machine to come up
	I0603 13:50:11.667459 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:11.668198 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:11.668231 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:11.668135 1144654 retry.go:31] will retry after 928.879285ms: waiting for machine to come up
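
Each "will retry after ..." line above comes from a polling loop that waits for libvirt to hand the domain an IP address, sleeping a growing, jittered interval between attempts. A generic Go sketch of that pattern is below; it is not minikube's retry package, and the starting interval, growth factor and timeout are invented.

// Generic polling sketch: retry a check with growing, jittered waits until a
// deadline, similar in spirit to the "will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond // assumed starting interval
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, jittered)
		time.Sleep(jittered)
		wait = wait * 3 / 2 // grow the base interval between attempts
	}
}

func main() {
	start := time.Now()
	err := retryUntil(5*time.Second, func() error {
		if time.Since(start) > 2*time.Second {
			return nil // stand-in for "the domain now has an IP"
		}
		return errors.New("machine has no IP yet")
	})
	fmt.Println("result:", err)
}
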
	I0603 13:50:09.645006 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:10.146403 1143252 node_ready.go:49] node "embed-certs-223260" has status "Ready":"True"
	I0603 13:50:10.146438 1143252 node_ready.go:38] duration metric: took 5.005510729s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:10.146453 1143252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:10.154249 1143252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164361 1143252 pod_ready.go:92] pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:10.164401 1143252 pod_ready.go:81] duration metric: took 10.115855ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164419 1143252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675214 1143252 pod_ready.go:92] pod "etcd-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:11.675243 1143252 pod_ready.go:81] duration metric: took 1.510815036s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675254 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.522734 1143450 crio.go:462] duration metric: took 1.634406537s to copy over tarball
	I0603 13:50:11.522837 1143450 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:13.983446 1143450 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460564522s)
	I0603 13:50:13.983484 1143450 crio.go:469] duration metric: took 2.460706596s to extract the tarball
	I0603 13:50:13.983503 1143450 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:50:14.029942 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:14.083084 1143450 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:50:14.083113 1143450 cache_images.go:84] Images are preloaded, skipping loading
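
The preload handling above has three steps: crictl images shows the expected images are absent, the preloaded-images tarball is copied to /preloaded.tar.lz4, and it is unpacked into /var with tar -I lz4 before the image check is repeated. A sketch of the extraction step using os/exec follows; it assumes a local tarball and an installed lz4 binary, and leaves out the scp and sudo plumbing the real flow does over SSH.

// Sketch of the preload extraction step, run locally with os/exec. Assumes the
// lz4 binary is installed and a tarball exists at the given path; the real flow
// copies the tarball over SSH and runs tar under sudo on the guest.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func extractPreload(tarball, dest string) error {
	start := time.Now()
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v\n%s", err, out)
	}
	fmt.Printf("extracted %s into %s in %s\n", tarball, dest, time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
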
	I0603 13:50:14.083122 1143450 kubeadm.go:928] updating node { 192.168.39.177 8444 v1.30.1 crio true true} ...
	I0603 13:50:14.083247 1143450 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-030870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:14.083319 1143450 ssh_runner.go:195] Run: crio config
	I0603 13:50:14.142320 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:14.142344 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:14.142354 1143450 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:14.142379 1143450 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-030870 NodeName:default-k8s-diff-port-030870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:50:14.142517 1143450 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-030870"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
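
Only a handful of fields in the generated configuration above vary per profile: the advertise address and node IP (192.168.39.177), the API server port (8444), and the node name. The toy text/template rendering below shows just that substitution for the InitConfiguration stub; the template text is illustrative and not the one minikube ships.

// Toy rendering of the per-profile values in the kubeadm InitConfiguration
// stub; the template text is illustrative, not the one minikube ships.
package main

import (
	"os"
	"text/template"
)

const stub = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(stub))
	_ = tmpl.Execute(os.Stdout, map[string]any{
		"NodeIP":        "192.168.39.177",
		"APIServerPort": 8444,
		"NodeName":      "default-k8s-diff-port-030870",
	})
}
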
	
	I0603 13:50:14.142577 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:50:14.153585 1143450 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:14.153687 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:14.164499 1143450 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0603 13:50:14.186564 1143450 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:14.205489 1143450 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0603 13:50:14.227005 1143450 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:14.231782 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:14.247433 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:14.368336 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:14.391791 1143450 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870 for IP: 192.168.39.177
	I0603 13:50:14.391816 1143450 certs.go:194] generating shared ca certs ...
	I0603 13:50:14.391840 1143450 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:14.392015 1143450 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:14.392075 1143450 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:14.392090 1143450 certs.go:256] generating profile certs ...
	I0603 13:50:14.392282 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/client.key
	I0603 13:50:14.392373 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key.7a30187e
	I0603 13:50:14.392428 1143450 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key
	I0603 13:50:14.392545 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:14.392602 1143450 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:14.392616 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:14.392650 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:14.392687 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:14.392722 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:14.392780 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:14.393706 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:14.424354 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:14.476267 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:14.514457 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:14.548166 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 13:50:14.584479 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:14.626894 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:14.663103 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:50:14.696750 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:14.725770 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:14.755779 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:14.786060 1143450 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:14.805976 1143450 ssh_runner.go:195] Run: openssl version
	I0603 13:50:14.812737 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:14.824707 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831139 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831255 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.838855 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:14.850974 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:14.865613 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871431 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871518 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.878919 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:14.891371 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:14.903721 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909069 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909180 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.915904 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:50:14.928622 1143450 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:14.934466 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:14.941321 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:14.947960 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:14.955629 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:14.962761 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:14.970396 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
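
The run of openssl x509 -checkend 86400 commands above confirms that each control-plane certificate stays valid for at least another 24 hours before the existing cluster is reused. An equivalent check in Go with crypto/x509 could look like the sketch below; the certificate path is taken from the log and exists only on the provisioned guest.

// Sketch of the 24-hour validity check done above with
// "openssl x509 -noout -in <cert> -checkend 86400", using crypto/x509 instead.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; it exists only on the provisioned guest.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
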
	I0603 13:50:14.977381 1143450 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:14.977543 1143450 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:14.977599 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.042628 1143450 cri.go:89] found id: ""
	I0603 13:50:15.042733 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:15.055439 1143450 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:15.055469 1143450 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:15.055476 1143450 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:15.055535 1143450 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:15.067250 1143450 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:15.068159 1143450 kubeconfig.go:125] found "default-k8s-diff-port-030870" server: "https://192.168.39.177:8444"
	I0603 13:50:15.070060 1143450 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:15.082723 1143450 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.177
	I0603 13:50:15.082788 1143450 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:15.082809 1143450 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:15.082972 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.124369 1143450 cri.go:89] found id: ""
	I0603 13:50:15.124509 1143450 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:15.144064 1143450 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:15.156148 1143450 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:15.156174 1143450 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:15.156240 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 13:50:15.166927 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:15.167006 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:12.598536 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:12.598972 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:12.599008 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:12.598948 1144654 retry.go:31] will retry after 882.970422ms: waiting for machine to come up
	I0603 13:50:13.483171 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:13.483723 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:13.483758 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:13.483640 1144654 retry.go:31] will retry after 1.215665556s: waiting for machine to come up
	I0603 13:50:14.701392 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:14.701960 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:14.701991 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:14.701899 1144654 retry.go:31] will retry after 1.614371992s: waiting for machine to come up
	I0603 13:50:16.318708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:16.319127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:16.319148 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:16.319103 1144654 retry.go:31] will retry after 2.146267337s: waiting for machine to come up
	I0603 13:50:13.683419 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:15.684744 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:16.792510 1143252 pod_ready.go:92] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.792538 1143252 pod_ready.go:81] duration metric: took 5.117277447s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.792549 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798083 1143252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.798112 1143252 pod_ready.go:81] duration metric: took 5.554915ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798126 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804217 1143252 pod_ready.go:92] pod "kube-proxy-s5vdl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.804247 1143252 pod_ready.go:81] duration metric: took 6.113411ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804262 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810317 1143252 pod_ready.go:92] pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.810343 1143252 pod_ready.go:81] duration metric: took 6.073098ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810357 1143252 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:15.178645 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 13:50:15.486524 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:15.486608 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:15.497694 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.509586 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:15.509665 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.521976 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 13:50:15.533446 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:15.533535 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
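
The stale-config cleanup above greps each kubeconfig-style file under /etc/kubernetes for the expected control-plane endpoint (https://control-plane.minikube.internal:8444) and removes any file that does not reference it, so the kubeadm phases that follow regenerate them. A minimal Go sketch of that loop, assuming the same file list and endpoint:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		// A missing file or one without the expected endpoint is removed,
    		// so `kubeadm init phase kubeconfig` can recreate it cleanly.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(f)
    			fmt.Println("removed stale config:", f)
    			continue
    		}
    		fmt.Println("keeping:", f)
    	}
    }
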
	I0603 13:50:15.545525 1143450 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:15.557558 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:15.710109 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.725380 1143450 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015227554s)
	I0603 13:50:16.725452 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.964275 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.061586 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
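
Rather than a full `kubeadm init`, the restart path re-runs individual init phases against the staged /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and etcd local, in that order. A hedged Go sketch of driving that sequence (error handling simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	config := "/var/tmp/minikube/kubeadm.yaml"
    	// Phase arguments in the order the log runs them.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", config)
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		fmt.Println("running: kubeadm", args)
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "phase failed:", err)
    			os.Exit(1)
    		}
    	}
    }
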
	I0603 13:50:17.183665 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:17.183764 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:17.684365 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.184269 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.254733 1143450 api_server.go:72] duration metric: took 1.07106398s to wait for apiserver process to appear ...
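
The apiserver-process wait above simply re-runs the pgrep probe roughly every half second until it matches. A minimal Go sketch under an assumed two-minute timeout:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		// Same probe as the log: newest process whose full command line
    		// matches kube-apiserver running under minikube.
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process is up")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for kube-apiserver process")
    }
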
	I0603 13:50:18.254769 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:50:18.254797 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:18.466825 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:18.467260 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:18.467292 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:18.467187 1144654 retry.go:31] will retry after 2.752334209s: waiting for machine to come up
	I0603 13:50:21.220813 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:21.221235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:21.221267 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:21.221182 1144654 retry.go:31] will retry after 3.082080728s: waiting for machine to come up
	I0603 13:50:18.819188 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.323790 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.193140 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.193177 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.193193 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.265534 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.265580 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.265602 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.277669 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.277703 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.754973 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.761802 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:21.761841 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.255071 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.262166 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.262227 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.755128 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.759896 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.759936 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.255520 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.262093 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.262128 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.755784 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.760053 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.760079 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.255534 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.259793 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:24.259820 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.755365 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.759964 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:50:24.768830 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:24.768862 1143450 api_server.go:131] duration metric: took 6.51408552s to wait for apiserver health ...
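
The /healthz progression above is typical of an apiserver restart: 403 while anonymous access to /healthz is not yet authorized, then 500 while post-start hooks (rbac/bootstrap-roles, scheduling priority classes, apiservice discovery) finish, and finally 200 with body "ok". A hedged Go sketch of a polling probe that treats anything other than 200/"ok" as "retry"; the insecure TLS setting is only for this bare illustration:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.39.177:8444/healthz" // endpoint from the log
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver serves a cluster-local certificate; this bare probe
    		// skips verification, which a real client should not do.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
    		resp, err := client.Get(url)
    		if err != nil {
    			continue // apiserver not listening yet
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		// 403 = anonymous /healthz not yet authorized, 500 = post-start hooks
    		// still failing; both mean "try again".
    		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    	fmt.Println("timed out waiting for /healthz")
    }
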
	I0603 13:50:24.768872 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:24.768879 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:24.771099 1143450 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:24.772806 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:24.784204 1143450 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
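
The 496-byte 1-k8s.conflist copied above configures the bridge CNI plugin that the "kvm2 driver + crio runtime" path recommends. The exact file content is not shown in the log, so the conflist embedded in this Go sketch is an illustrative stand-in for a typical bridge configuration, not minikube's actual file:

    package main

    import (
    	"fmt"
    	"os"
    )

    // Illustrative bridge CNI conflist (assumption); the real file minikube
    // writes to /etc/cni/net.d/1-k8s.conflist may differ.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote bridge CNI config")
    }
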
	I0603 13:50:24.805572 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:24.816944 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:24.816988 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:24.816997 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:24.817008 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:24.817021 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:24.817028 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:50:24.817037 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:24.817044 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:24.817050 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:50:24.817060 1143450 system_pods.go:74] duration metric: took 11.461696ms to wait for pod list to return data ...
	I0603 13:50:24.817069 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:24.820804 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:24.820834 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:24.820846 1143450 node_conditions.go:105] duration metric: took 3.771492ms to run NodePressure ...
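
The verification step above lists the kube-system pods and reads node capacity (ephemeral storage, CPU) to check NodePressure. A hedged client-go sketch of the same queries; the kubeconfig path is an assumption, since the test run uses its own profile kubeconfig:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is illustrative only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()

    	// Wait-for-pods step: list everything in kube-system and report phases.
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
    	}

    	// NodePressure step: read per-node CPU and ephemeral-storage capacity.
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }
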
	I0603 13:50:24.820865 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:25.098472 1143450 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103237 1143450 kubeadm.go:733] kubelet initialised
	I0603 13:50:25.103263 1143450 kubeadm.go:734] duration metric: took 4.763539ms waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103274 1143450 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:25.109364 1143450 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.114629 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114662 1143450 pod_ready.go:81] duration metric: took 5.268473ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.114676 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114687 1143450 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.118734 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118777 1143450 pod_ready.go:81] duration metric: took 4.079659ms for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.118790 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118810 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.123298 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123334 1143450 pod_ready.go:81] duration metric: took 4.509948ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.123351 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123361 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.210283 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210316 1143450 pod_ready.go:81] duration metric: took 86.945898ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.210329 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210338 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.609043 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609074 1143450 pod_ready.go:81] duration metric: took 398.728553ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.609084 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609091 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.009831 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009866 1143450 pod_ready.go:81] duration metric: took 400.766037ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.009880 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009888 1143450 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.410271 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410301 1143450 pod_ready.go:81] duration metric: took 400.402293ms for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.410315 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410326 1143450 pod_ready.go:38] duration metric: took 1.307039933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:26.410347 1143450 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:26.422726 1143450 ops.go:34] apiserver oom_adj: -16
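
The oom_adj check above reads the legacy OOM score adjustment of the running kube-apiserver; -16 tells the kernel to strongly avoid OOM-killing it. A small Go sketch of the same read, mirroring `cat /proc/$(pgrep kube-apiserver)/oom_adj`:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the newest kube-apiserver PID (pgrep -n), then read its oom_adj.
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
    		os.Exit(1)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "read failed:", err)
    		os.Exit(1)
    	}
    	// -16 (as in the log) means the kernel strongly avoids OOM-killing the apiserver.
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }
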
	I0603 13:50:26.422753 1143450 kubeadm.go:591] duration metric: took 11.367271168s to restartPrimaryControlPlane
	I0603 13:50:26.422763 1143450 kubeadm.go:393] duration metric: took 11.445396197s to StartCluster
	I0603 13:50:26.422784 1143450 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.422866 1143450 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:26.424423 1143450 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.424744 1143450 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:26.426628 1143450 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:26.424855 1143450 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:26.424985 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:26.428227 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:26.428239 1143450 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428241 1143450 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428275 1143450 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-030870"
	I0603 13:50:26.428285 1143450 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428297 1143450 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:50:26.428243 1143450 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428338 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428404 1143450 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428428 1143450 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:26.428501 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428650 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428676 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428724 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428751 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428948 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.429001 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.445709 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0603 13:50:26.446187 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.446719 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.446743 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.447152 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.447817 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.447852 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.449660 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0603 13:50:26.449721 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0603 13:50:26.450120 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450161 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450735 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450755 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.450906 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450930 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.451177 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451333 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451421 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.451909 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.451951 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.455458 1143450 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.455484 1143450 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:26.455523 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.455776 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.455825 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.470807 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0603 13:50:26.471179 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0603 13:50:26.471763 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.471921 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472042 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0603 13:50:26.472471 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472501 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472575 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472750 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472760 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472966 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473095 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.473118 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.473132 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473134 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473357 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473486 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.474129 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.474183 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.475437 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.475594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.477911 1143450 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:26.479474 1143450 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:24.304462 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:24.305104 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:24.305175 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:24.305099 1144654 retry.go:31] will retry after 4.178596743s: waiting for machine to come up
	I0603 13:50:26.480998 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:26.481021 1143450 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:26.481047 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.479556 1143450 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.481095 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:26.481116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.484634 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.484694 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485147 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485160 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485538 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485628 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485729 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485829 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485856 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.485993 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.486040 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.486158 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.496035 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0603 13:50:26.496671 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.497270 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.497290 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.497719 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.497989 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.500018 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.500280 1143450 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.500298 1143450 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:26.500318 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.503226 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503732 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.503768 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503967 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.504212 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.504399 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.504556 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.608774 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:26.629145 1143450 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:26.692164 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.784756 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.788686 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:26.788711 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:26.841094 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:26.841129 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:26.907657 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:26.907688 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:26.963244 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963280 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963618 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963641 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963649 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963653 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.963657 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963962 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963980 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963982 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.971726 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.971748 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.972101 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.972125 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.975238 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:27.653643 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.653689 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654037 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654061 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.654078 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.654087 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654429 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.654484 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654507 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847367 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847397 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.847745 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.847770 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847779 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847785 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.847793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.848112 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.848130 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.848144 1143450 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-030870"
	I0603 13:50:27.851386 1143450 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0603 13:50:23.817272 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:25.818013 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:27.818160 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:29.798777 1142862 start.go:364] duration metric: took 56.694826675s to acquireMachinesLock for "no-preload-817450"
	I0603 13:50:29.798855 1142862 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:29.798866 1142862 fix.go:54] fixHost starting: 
	I0603 13:50:29.799329 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:29.799369 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:29.817787 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0603 13:50:29.818396 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:29.819003 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:50:29.819025 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:29.819450 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:29.819617 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:29.819782 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:50:29.821742 1142862 fix.go:112] recreateIfNeeded on no-preload-817450: state=Stopped err=<nil>
	I0603 13:50:29.821777 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	W0603 13:50:29.821973 1142862 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:29.823915 1142862 out.go:177] * Restarting existing kvm2 VM for "no-preload-817450" ...
	I0603 13:50:27.852929 1143450 addons.go:510] duration metric: took 1.428071927s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0603 13:50:28.633355 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:29.825584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Start
	I0603 13:50:29.825783 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring networks are active...
	I0603 13:50:29.826746 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network default is active
	I0603 13:50:29.827116 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network mk-no-preload-817450 is active
	I0603 13:50:29.827617 1142862 main.go:141] libmachine: (no-preload-817450) Getting domain xml...
	I0603 13:50:29.828419 1142862 main.go:141] libmachine: (no-preload-817450) Creating domain...
	I0603 13:50:28.485041 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.485598 1143678 main.go:141] libmachine: (old-k8s-version-151788) Found IP for machine: 192.168.50.65
	I0603 13:50:28.485624 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserving static IP address...
	I0603 13:50:28.485639 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has current primary IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.486053 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserved static IP address: 192.168.50.65
	I0603 13:50:28.486109 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.486123 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting for SSH to be available...
	I0603 13:50:28.486144 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | skip adding static IP to network mk-old-k8s-version-151788 - found existing host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"}
	I0603 13:50:28.486156 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Getting to WaitForSSH function...
	I0603 13:50:28.488305 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.488754 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.488788 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.489025 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH client type: external
	I0603 13:50:28.489048 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa (-rw-------)
	I0603 13:50:28.489114 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:28.489147 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | About to run SSH command:
	I0603 13:50:28.489167 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | exit 0
	I0603 13:50:28.613732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:28.614183 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:50:28.614879 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.617742 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.618270 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618481 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:50:28.618699 1143678 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:28.618719 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:28.618967 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.621356 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621655 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.621685 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.622117 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622321 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622511 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.622750 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.622946 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.622958 1143678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:28.726383 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:28.726419 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.726740 1143678 buildroot.go:166] provisioning hostname "old-k8s-version-151788"
	I0603 13:50:28.726777 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.727042 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.729901 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730372 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.730402 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730599 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.730824 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731031 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731205 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.731403 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.731585 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.731599 1143678 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-151788 && echo "old-k8s-version-151788" | sudo tee /etc/hostname
	I0603 13:50:28.848834 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-151788
	
	I0603 13:50:28.848867 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.852250 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852698 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.852721 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852980 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.853239 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853536 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853819 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.854093 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.854338 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.854367 1143678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-151788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-151788/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-151788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:28.967427 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:28.967461 1143678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:28.967520 1143678 buildroot.go:174] setting up certificates
	I0603 13:50:28.967538 1143678 provision.go:84] configureAuth start
	I0603 13:50:28.967550 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.967946 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.970841 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971226 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.971256 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971449 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.974316 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974702 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.974732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974911 1143678 provision.go:143] copyHostCerts
	I0603 13:50:28.974994 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:28.975010 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:28.975068 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:28.975247 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:28.975260 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:28.975283 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:28.975354 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:28.975362 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:28.975385 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:28.975463 1143678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-151788 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-151788]
	I0603 13:50:29.096777 1143678 provision.go:177] copyRemoteCerts
	I0603 13:50:29.096835 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:29.096865 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.099989 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100408 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.100434 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100644 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.100831 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.100975 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.101144 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.184886 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:29.211432 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 13:50:29.238552 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:50:29.266803 1143678 provision.go:87] duration metric: took 299.247567ms to configureAuth
	I0603 13:50:29.266844 1143678 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:29.267107 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:50:29.267220 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.270966 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271417 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.271472 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271688 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.271893 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272121 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272327 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.272544 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.272787 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.272811 1143678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:29.548407 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:29.548437 1143678 machine.go:97] duration metric: took 929.724002ms to provisionDockerMachine
	I0603 13:50:29.548449 1143678 start.go:293] postStartSetup for "old-k8s-version-151788" (driver="kvm2")
	I0603 13:50:29.548461 1143678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:29.548486 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.548924 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:29.548992 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.552127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552531 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.552571 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552756 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.552974 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.553166 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.553364 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.637026 1143678 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:29.641264 1143678 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:29.641293 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:29.641376 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:29.641509 1143678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:29.641600 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:29.657273 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:29.688757 1143678 start.go:296] duration metric: took 140.291954ms for postStartSetup
	I0603 13:50:29.688806 1143678 fix.go:56] duration metric: took 21.605539652s for fixHost
	I0603 13:50:29.688843 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.691764 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692170 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.692216 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692356 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.692623 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692814 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692996 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.693180 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.693372 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.693384 1143678 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:29.798629 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422629.770375968
	
	I0603 13:50:29.798655 1143678 fix.go:216] guest clock: 1717422629.770375968
	I0603 13:50:29.798662 1143678 fix.go:229] Guest: 2024-06-03 13:50:29.770375968 +0000 UTC Remote: 2024-06-03 13:50:29.688811675 +0000 UTC m=+247.377673500 (delta=81.564293ms)
	I0603 13:50:29.798683 1143678 fix.go:200] guest clock delta is within tolerance: 81.564293ms
	I0603 13:50:29.798688 1143678 start.go:83] releasing machines lock for "old-k8s-version-151788", held for 21.715483341s
	I0603 13:50:29.798712 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.799019 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:29.802078 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802479 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.802522 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802674 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803271 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803496 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803584 1143678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:29.803646 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.803961 1143678 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:29.803988 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.806505 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806863 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806926 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.806961 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807093 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807299 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807345 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.807386 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807476 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.807670 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807669 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.807841 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807947 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.808183 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.890622 1143678 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:29.918437 1143678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:30.064471 1143678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:30.073881 1143678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:30.073969 1143678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:30.097037 1143678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:30.097070 1143678 start.go:494] detecting cgroup driver to use...
	I0603 13:50:30.097147 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:30.114374 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:30.132000 1143678 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:30.132075 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:30.148156 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:30.164601 1143678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:30.303125 1143678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:30.475478 1143678 docker.go:233] disabling docker service ...
	I0603 13:50:30.475578 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:30.494632 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:30.513383 1143678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:30.691539 1143678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:30.849280 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:30.869107 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:30.893451 1143678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 13:50:30.893528 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.909358 1143678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:30.909465 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.926891 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.941879 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.957985 1143678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:30.971349 1143678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:30.984948 1143678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:30.985023 1143678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:30.999255 1143678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:31.011615 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:31.162848 1143678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:31.352121 1143678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:31.352190 1143678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:31.357946 1143678 start.go:562] Will wait 60s for crictl version
	I0603 13:50:31.358032 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:31.362540 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:31.410642 1143678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:31.410757 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.444750 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.482404 1143678 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 13:50:31.484218 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:31.488049 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488663 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:31.488695 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488985 1143678 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:31.494813 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:31.511436 1143678 kubeadm.go:877] updating cluster {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:31.511597 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:50:31.511659 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:31.571733 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:31.571819 1143678 ssh_runner.go:195] Run: which lz4
	I0603 13:50:31.577765 1143678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 13:50:31.583983 1143678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:31.584025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 13:50:30.319230 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:32.824874 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:30.633456 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:32.134192 1143450 node_ready.go:49] node "default-k8s-diff-port-030870" has status "Ready":"True"
	I0603 13:50:32.134227 1143450 node_ready.go:38] duration metric: took 5.505047986s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:32.134241 1143450 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:32.143157 1143450 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150075 1143450 pod_ready.go:92] pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:32.150113 1143450 pod_ready.go:81] duration metric: took 6.922006ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150128 1143450 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:34.157758 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
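	[editor's note] The node_ready/pod_ready lines in this stretch of the log (including the repeated metrics-server-569cc877fc-v7d9t "Ready":"False" checks) all poll the Ready condition of the corresponding Kubernetes object until it turns True or the timeout expires. The sketch below is a minimal client-go equivalent of that kind of check for anyone reproducing it by hand; the kubeconfig path and node name are copied from this log and are otherwise assumptions, not minikube's actual implementation.

	    // ready_check.go: rough sketch of the Ready-condition polling seen in
	    // node_ready.go/pod_ready.go above; illustrative only, not minikube code.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Kubeconfig path and node name are taken from the log; adjust for your cluster.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        const node = "default-k8s-diff-port-030870"
	        for {
	            n, err := cs.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
	            if err == nil {
	                for _, c := range n.Status.Conditions {
	                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	                        fmt.Printf("node %q has status \"Ready\":\"True\"\n", node)
	                        return
	                    }
	                }
	            }
	            time.Sleep(2 * time.Second) // keep polling until Ready or an external timeout fires
	        }
	    }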
	I0603 13:50:31.283193 1142862 main.go:141] libmachine: (no-preload-817450) Waiting to get IP...
	I0603 13:50:31.284191 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.284681 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.284757 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.284641 1144889 retry.go:31] will retry after 246.139268ms: waiting for machine to come up
	I0603 13:50:31.532345 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.533024 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.533056 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.532956 1144889 retry.go:31] will retry after 283.586657ms: waiting for machine to come up
	I0603 13:50:31.818610 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.819271 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.819302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.819235 1144889 retry.go:31] will retry after 345.327314ms: waiting for machine to come up
	I0603 13:50:32.165948 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.166532 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.166585 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.166485 1144889 retry.go:31] will retry after 567.370644ms: waiting for machine to come up
	I0603 13:50:32.735409 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.736074 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.736118 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.735978 1144889 retry.go:31] will retry after 523.349811ms: waiting for machine to come up
	I0603 13:50:33.261023 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.261738 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.261769 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.261685 1144889 retry.go:31] will retry after 617.256992ms: waiting for machine to come up
	I0603 13:50:33.880579 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.881159 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.881188 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.881113 1144889 retry.go:31] will retry after 975.807438ms: waiting for machine to come up
	I0603 13:50:34.858935 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:34.859418 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:34.859447 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:34.859365 1144889 retry.go:31] will retry after 1.257722281s: waiting for machine to come up
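
The "will retry after ..." lines above come from a retry loop that re-queries libvirt for the domain's IP address with a growing, jittered delay. Below is a minimal stand-in sketch of that pattern; the lookupIP function is hypothetical (not the KVM driver's real DHCP-lease lookup), and the starting delay is simply chosen to resemble the intervals in the log.

// Sketch only: re-check for the machine's IP with a randomized, growing delay,
// as the retry.go lines above do. lookupIP is a hypothetical stand-in.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Stand-in for querying the libvirt network's DHCP leases.
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Jitter the delay and grow it between attempts.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("retry %d: waiting %s for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		delay += delay / 2
	}
	fmt.Println("gave up waiting for machine IP")
}
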
	I0603 13:50:33.399678 1143678 crio.go:462] duration metric: took 1.821959808s to copy over tarball
	I0603 13:50:33.399768 1143678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:36.631033 1143678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.231219364s)
	I0603 13:50:36.631081 1143678 crio.go:469] duration metric: took 3.231364789s to extract the tarball
	I0603 13:50:36.631092 1143678 ssh_runner.go:146] rm: /preloaded.tar.lz4
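
The preload flow recorded above (scp the tarball to /preloaded.tar.lz4, extract it into /var with lz4, then delete it) can be replayed with a minimal Go sketch. This runs the commands locally via os/exec rather than through minikube's ssh_runner, and the paths are simply the ones shown in the log; it assumes root access and an lz4 binary on the host.

// Minimal sketch (not minikube's ssh_runner): replay the preload steps above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
		{"sudo", "rm", "-f", "/preloaded.tar.lz4"},
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("step %v failed: %v\n%s\n", s, err, out)
			return
		}
	}
	fmt.Println("preloaded images extracted into /var")
}
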
	I0603 13:50:36.677954 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:36.718160 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:36.718197 1143678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.718456 1143678 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.718302 1143678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.718343 1143678 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.718858 1143678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.720644 1143678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.720573 1143678 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720576 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.720603 1143678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.720608 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.721118 1143678 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.907182 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.907179 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.910017 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.920969 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.925739 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.935710 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.946767 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 13:50:36.973425 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.050763 1143678 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 13:50:37.050817 1143678 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.050846 1143678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 13:50:37.050876 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.050880 1143678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.050906 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162505 1143678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 13:50:37.162561 1143678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.162608 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162706 1143678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 13:50:37.162727 1143678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.162754 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162858 1143678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 13:50:37.162898 1143678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.162922 1143678 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 13:50:37.162965 1143678 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 13:50:37.163001 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162943 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.164963 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.165019 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.165136 1143678 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 13:50:37.165260 1143678 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.165295 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.188179 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.188292 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 13:50:37.188315 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.188371 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.188561 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.300592 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 13:50:37.300642 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 13:50:35.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.160066 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.334685 1143450 pod_ready.go:92] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.334719 1143450 pod_ready.go:81] duration metric: took 5.184582613s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.334732 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341104 1143450 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.341140 1143450 pod_ready.go:81] duration metric: took 6.399805ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341154 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347174 1143450 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.347208 1143450 pod_ready.go:81] duration metric: took 6.044519ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347220 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356909 1143450 pod_ready.go:92] pod "kube-proxy-thsrx" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.356949 1143450 pod_ready.go:81] duration metric: took 9.72108ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356962 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363891 1143450 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.363915 1143450 pod_ready.go:81] duration metric: took 6.9442ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363927 1143450 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:39.372092 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.118754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:36.119214 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:36.119251 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:36.119148 1144889 retry.go:31] will retry after 1.380813987s: waiting for machine to come up
	I0603 13:50:37.501464 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:37.501889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:37.501937 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:37.501849 1144889 retry.go:31] will retry after 2.144177789s: waiting for machine to come up
	I0603 13:50:39.648238 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:39.648744 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:39.648768 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:39.648693 1144889 retry.go:31] will retry after 1.947487062s: waiting for machine to come up
	I0603 13:50:37.360149 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 13:50:37.360196 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 13:50:37.360346 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 13:50:37.360371 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 13:50:37.360436 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 13:50:37.543409 1143678 cache_images.go:92] duration metric: took 825.189409ms to LoadCachedImages
	W0603 13:50:37.543559 1143678 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
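
The "Unable to load cached images" warning above boils down to a missing file under .minikube/cache/images. A small standalone check of that directory looks like the sketch below; this is not minikube's cache_images code, and the cache path and image file names are taken directly from the log.

// Sketch: verify which on-disk cached image files exist, mirroring the
// stat failure reported for etcd_3.4.13-0 above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	cacheDir := "/home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io"
	for _, name := range []string{"etcd_3.4.13-0", "kube-apiserver_v1.20.0", "coredns_1.7.0"} {
		p := filepath.Join(cacheDir, name)
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("missing cached image %s: %v\n", name, err)
			continue
		}
		fmt.Printf("cached image present: %s\n", p)
	}
}
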
	I0603 13:50:37.543581 1143678 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0603 13:50:37.543723 1143678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-151788 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:37.543804 1143678 ssh_runner.go:195] Run: crio config
	I0603 13:50:37.601388 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:50:37.601428 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:37.601445 1143678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:37.601471 1143678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-151788 NodeName:old-k8s-version-151788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 13:50:37.601664 1143678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-151788"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:37.601746 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 13:50:37.613507 1143678 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:37.613588 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:37.623853 1143678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0603 13:50:37.642298 1143678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:37.660863 1143678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
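
The kubeadm.yaml.new written above is rendered from the cluster configuration. Purely as an illustration (this is not minikube's actual bootstrapper template), the InitConfiguration portion could be produced with a Go text/template using the values seen in the log:

// Illustrative sketch: render a trimmed InitConfiguration like the one above.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	data := struct {
		NodeIP, CRISocket, NodeName string
		APIServerPort               int
	}{"192.168.50.65", "/var/run/crio/crio.sock", "old-k8s-version-151788", 8443}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data)
}
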
	I0603 13:50:37.679974 1143678 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:37.685376 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:37.702732 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:37.859343 1143678 ssh_runner.go:195] Run: sudo systemctl start kubelet
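
The bash one-liner a few lines up rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP before the kubelet is restarted. A rough Go equivalent is sketched below; it writes the result to /tmp/hosts.new instead of touching /etc/hosts, and the IP is the one from the log.

// Rough equivalent of the /etc/hosts one-liner: drop any existing
// control-plane.minikube.internal line and append the current mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.65\tcontrol-plane.minikube.internal"

	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println("read /etc/hosts:", err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Println("write:", err)
	}
}
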
	I0603 13:50:37.880684 1143678 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788 for IP: 192.168.50.65
	I0603 13:50:37.880714 1143678 certs.go:194] generating shared ca certs ...
	I0603 13:50:37.880737 1143678 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:37.880952 1143678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:37.881012 1143678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:37.881024 1143678 certs.go:256] generating profile certs ...
	I0603 13:50:37.881179 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key
	I0603 13:50:37.881279 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3
	I0603 13:50:37.881334 1143678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key
	I0603 13:50:37.881554 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:37.881602 1143678 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:37.881629 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:37.881667 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:37.881698 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:37.881730 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:37.881805 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:37.882741 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:37.919377 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:37.957218 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:37.987016 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:38.024442 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 13:50:38.051406 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:38.094816 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:38.143689 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:50:38.171488 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:38.197296 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:38.224025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:38.250728 1143678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:38.270485 1143678 ssh_runner.go:195] Run: openssl version
	I0603 13:50:38.276995 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:38.288742 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293880 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293955 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.300456 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:38.312180 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:38.324349 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329812 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329881 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.337560 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:38.350229 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:38.362635 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368842 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368920 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.376029 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:50:38.387703 1143678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:38.393071 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:38.399760 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:38.406332 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:38.413154 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:38.419162 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:38.425818 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
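
The openssl "-checkend 86400" probes above verify that each certificate is still valid 24 hours from now. The same check in Go's standard library looks roughly like the sketch below; the certificate path is taken from the log, and this is only an illustration of the check, not minikube's cert validation code.

// Sketch: does the cert expire within the next 24 hours?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
}
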
	I0603 13:50:38.432495 1143678 kubeadm.go:391] StartCluster: {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:38.432659 1143678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:38.432718 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.479889 1143678 cri.go:89] found id: ""
	I0603 13:50:38.479975 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:38.490549 1143678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:38.490574 1143678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:38.490580 1143678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:38.490637 1143678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:38.501024 1143678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:38.503665 1143678 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:38.504563 1143678 kubeconfig.go:62] /home/jenkins/minikube-integration/19011-1078924/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-151788" cluster setting kubeconfig missing "old-k8s-version-151788" context setting]
	I0603 13:50:38.505614 1143678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:38.562691 1143678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:38.573839 1143678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.65
	I0603 13:50:38.573889 1143678 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:38.573905 1143678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:38.573987 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.615876 1143678 cri.go:89] found id: ""
	I0603 13:50:38.615972 1143678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:38.633568 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:38.645197 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:38.645229 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:38.645291 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:50:38.655344 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:38.655423 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:38.665789 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:50:38.674765 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:38.674842 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:38.684268 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.693586 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:38.693650 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.703313 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:50:38.712523 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:38.712597 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:50:38.722362 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:38.732190 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:38.875545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.722534 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.970226 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.090817 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.193178 1143678 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:40.193485 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:40.693580 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.193579 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.693608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:42.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:39.318177 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.818337 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.373738 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:43.870381 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.597745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:41.598343 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:41.598372 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:41.598280 1144889 retry.go:31] will retry after 2.47307834s: waiting for machine to come up
	I0603 13:50:44.074548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:44.075009 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:44.075037 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:44.074970 1144889 retry.go:31] will retry after 3.055733752s: waiting for machine to come up
	I0603 13:50:42.693593 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.194448 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.693645 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.694583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.194065 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.694138 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.194173 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.694344 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:47.194063 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
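
The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a poll-until-deadline loop waiting for the apiserver process to appear. A minimal local sketch of that loop is shown below; it uses os/exec directly, whereas minikube issues the same command over SSH through ssh_runner, and the timeout value here is only an assumption.

// Sketch: poll pgrep for a kube-apiserver process every 500ms until a deadline.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServer(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // PID of the newest match
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServer(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
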
	I0603 13:50:44.316348 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:46.317245 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:47.133727 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134266 1142862 main.go:141] libmachine: (no-preload-817450) Found IP for machine: 192.168.72.125
	I0603 13:50:47.134301 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has current primary IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134308 1142862 main.go:141] libmachine: (no-preload-817450) Reserving static IP address...
	I0603 13:50:47.134745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.134777 1142862 main.go:141] libmachine: (no-preload-817450) Reserved static IP address: 192.168.72.125
	I0603 13:50:47.134797 1142862 main.go:141] libmachine: (no-preload-817450) DBG | skip adding static IP to network mk-no-preload-817450 - found existing host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"}
	I0603 13:50:47.134816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Getting to WaitForSSH function...
	I0603 13:50:47.134858 1142862 main.go:141] libmachine: (no-preload-817450) Waiting for SSH to be available...
	I0603 13:50:47.137239 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137669 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.137705 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137810 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH client type: external
	I0603 13:50:47.137835 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa (-rw-------)
	I0603 13:50:47.137870 1142862 main.go:141] libmachine: (no-preload-817450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:47.137879 1142862 main.go:141] libmachine: (no-preload-817450) DBG | About to run SSH command:
	I0603 13:50:47.137889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | exit 0
	I0603 13:50:47.265932 1142862 main.go:141] libmachine: (no-preload-817450) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:47.266268 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetConfigRaw
	I0603 13:50:47.267007 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.269463 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.269849 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.269885 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.270135 1142862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/config.json ...
	I0603 13:50:47.270355 1142862 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:47.270375 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:47.270589 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.272915 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273307 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.273341 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273543 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.273737 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.273905 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.274061 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.274242 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.274417 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.274429 1142862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:47.380760 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:47.380789 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381068 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:50:47.381095 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381314 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.384093 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384460 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.384482 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.384798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.384938 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.385099 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.385276 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.385533 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.385562 1142862 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-817450 && echo "no-preload-817450" | sudo tee /etc/hostname
	I0603 13:50:47.505203 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-817450
	
	I0603 13:50:47.505231 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.508267 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508696 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.508721 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508877 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.509066 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509281 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509437 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.509606 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.509780 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.509795 1142862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-817450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-817450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-817450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:47.618705 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:47.618757 1142862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:47.618822 1142862 buildroot.go:174] setting up certificates
	I0603 13:50:47.618835 1142862 provision.go:84] configureAuth start
	I0603 13:50:47.618854 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.619166 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.621974 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622512 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.622548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622652 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.624950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625275 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.625302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625419 1142862 provision.go:143] copyHostCerts
	I0603 13:50:47.625504 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:47.625520 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:47.625591 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:47.625697 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:47.625706 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:47.625725 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:47.625790 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:47.625800 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:47.625826 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:47.625891 1142862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.no-preload-817450 san=[127.0.0.1 192.168.72.125 localhost minikube no-preload-817450]
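The provision.go:117 line above issues a server certificate whose SANs cover 127.0.0.1, the node IP, localhost, minikube and the node name. The following crypto/x509 sketch shows that step in outline; the RSA-2048 key type is an assumption, while the org and the 26280h expiry mirror values visible elsewhere in this log. It is not minikube's code.

	package provisionsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server certificate with the SAN list from the log
	// entry above, using an already-loaded CA certificate and key.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048) // key type is an assumption
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-817450"}},
			DNSNames:     []string{"localhost", "minikube", "no-preload-817450"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.125")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}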
	I0603 13:50:47.733710 1142862 provision.go:177] copyRemoteCerts
	I0603 13:50:47.733769 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:47.733801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.736326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736657 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.736686 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.737036 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.737222 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.737341 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:47.821893 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:47.848085 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 13:50:47.875891 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:47.900761 1142862 provision.go:87] duration metric: took 281.906702ms to configureAuth
	I0603 13:50:47.900795 1142862 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:47.900986 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:47.901072 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.904128 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904551 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.904581 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904802 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.905018 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905203 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905413 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.905609 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.905816 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.905839 1142862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:48.176290 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:48.176321 1142862 machine.go:97] duration metric: took 905.950732ms to provisionDockerMachine
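The provisioning step that just completed wrote a one-line sysconfig drop-in carrying the --insecure-registry flag and restarted CRI-O so it takes effect. A minimal Go sketch of the equivalent action follows (hypothetical helper, assumes root; minikube runs the shell shown above over SSH).

	package runtimesketch

	import (
		"os"
		"os/exec"
	)

	// configureCRIOOptions writes /etc/sysconfig/crio.minikube and restarts the
	// crio service, matching the tee + systemctl restart seen in the log.
	func configureCRIOOptions(opts string) error {
		if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
			return err
		}
		content := "CRIO_MINIKUBE_OPTIONS='" + opts + "'\n"
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
			return err
		}
		return exec.Command("systemctl", "restart", "crio").Run()
	}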
	I0603 13:50:48.176333 1142862 start.go:293] postStartSetup for "no-preload-817450" (driver="kvm2")
	I0603 13:50:48.176344 1142862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:48.176361 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.176689 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:48.176712 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.179595 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.179994 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.180020 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.180186 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.180398 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.180561 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.180704 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.267996 1142862 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:48.272936 1142862 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:48.272970 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:48.273044 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:48.273141 1142862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:48.273285 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:48.283984 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:48.310846 1142862 start.go:296] duration metric: took 134.495139ms for postStartSetup
	I0603 13:50:48.310899 1142862 fix.go:56] duration metric: took 18.512032449s for fixHost
	I0603 13:50:48.310928 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.313969 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314331 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.314358 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.314896 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315258 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.315442 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:48.315681 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:48.315698 1142862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:50:48.422576 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422648.390814282
	
	I0603 13:50:48.422599 1142862 fix.go:216] guest clock: 1717422648.390814282
	I0603 13:50:48.422606 1142862 fix.go:229] Guest: 2024-06-03 13:50:48.390814282 +0000 UTC Remote: 2024-06-03 13:50:48.310904217 +0000 UTC m=+357.796105522 (delta=79.910065ms)
	I0603 13:50:48.422636 1142862 fix.go:200] guest clock delta is within tolerance: 79.910065ms
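The fix.go lines above parse the guest's "date +%s.%N" output and accept the ~80ms drift as within tolerance. A small illustrative helper for that comparison is sketched below; the caller-supplied tolerance is an assumption, since the actual threshold is not printed in this log.

	package clocksketch

	import (
		"strconv"
		"time"
	)

	// guestClockWithinTolerance converts the guest's epoch-seconds string to a
	// time, takes the absolute difference from the host clock, and reports
	// whether it stays within tol.
	func guestClockWithinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, false
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tol
	}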
	I0603 13:50:48.422642 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 18.623816039s
	I0603 13:50:48.422659 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.422954 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:48.426261 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426671 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.426701 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426864 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427460 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427661 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427762 1142862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:48.427827 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.427878 1142862 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:48.427914 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.430586 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430830 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430965 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.430993 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431177 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.431355 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431387 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431516 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431676 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431751 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.431798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431936 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.506899 1142862 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:48.545903 1142862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:48.700235 1142862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:48.706614 1142862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:48.706704 1142862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:48.724565 1142862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:48.724592 1142862 start.go:494] detecting cgroup driver to use...
	I0603 13:50:48.724664 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:48.741006 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:48.758824 1142862 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:48.758899 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:48.773280 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:48.791049 1142862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:48.917847 1142862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:49.081837 1142862 docker.go:233] disabling docker service ...
	I0603 13:50:49.081927 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:49.097577 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:49.112592 1142862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:49.228447 1142862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:49.350782 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:49.366017 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:49.385685 1142862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:49.385765 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.396361 1142862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:49.396432 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.408606 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.419642 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.430431 1142862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:49.441378 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.451810 1142862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.469080 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.480054 1142862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:49.489742 1142862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:49.489814 1142862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:49.502889 1142862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:49.512414 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:49.639903 1142862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:49.786388 1142862 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:49.786486 1142862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:49.791642 1142862 start.go:562] Will wait 60s for crictl version
	I0603 13:50:49.791711 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:49.796156 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:49.841667 1142862 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:49.841765 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.872213 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.910979 1142862 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:46.370749 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:48.870860 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:49.912417 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:49.915368 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915731 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:49.915759 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915913 1142862 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:49.920247 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:49.933231 1142862 kubeadm.go:877] updating cluster {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:49.933358 1142862 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:49.933388 1142862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:49.970029 1142862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:49.970059 1142862 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:49.970118 1142862 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:49.970147 1142862 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.970163 1142862 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.970198 1142862 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.970239 1142862 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.970316 1142862 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.970328 1142862 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.970379 1142862 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971837 1142862 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.971841 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.971808 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.971876 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.971816 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.971813 1142862 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.126557 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 13:50:50.146394 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.149455 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.149755 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.154990 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.162983 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.177520 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.188703 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.299288 1142862 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 13:50:50.299312 1142862 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 13:50:50.299345 1142862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.299350 1142862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.299389 1142862 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 13:50:50.299406 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299413 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299422 1142862 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.299488 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353368 1142862 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 13:50:50.353431 1142862 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.353485 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353506 1142862 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 13:50:50.353543 1142862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.353591 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379011 1142862 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 13:50:50.379028 1142862 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 13:50:50.379054 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.379062 1142862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.379105 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379075 1142862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.379146 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.379181 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379212 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.379229 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.379239 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.482204 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 13:50:50.482210 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.482332 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.511560 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 13:50:50.511671 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 13:50:50.511721 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.511769 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:50.511772 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 13:50:50.511682 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:50.511868 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:50.512290 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 13:50:50.512360 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:50.549035 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 13:50:50.549061 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 13:50:50.549066 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549156 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549166 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:50:47.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.193894 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.694053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.694081 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.194053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.694265 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.694283 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:52.194444 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.321194 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.816679 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:52.818121 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:51.372716 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:53.372880 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.573615 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 13:50:50.573661 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 13:50:50.573708 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 13:50:50.573737 1142862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:50.573754 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 13:50:50.573816 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 13:50:50.573839 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 13:50:52.739312 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (2.190102069s)
	I0603 13:50:52.739333 1142862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.165569436s)
	I0603 13:50:52.739354 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 13:50:52.739365 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 13:50:52.739372 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:52.739420 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:54.995960 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.256502953s)
	I0603 13:50:54.996000 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 13:50:54.996019 1142862 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:54.996076 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:52.694071 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.193597 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.694503 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.193609 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.694446 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.193856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.693583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.194271 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.693558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:57.194427 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.317668 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:57.318423 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.872030 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:58.376034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.844775 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 13:50:55.844853 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:55.844967 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:58.110074 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.265068331s)
	I0603 13:50:58.110103 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 13:50:58.110115 1142862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:58.110169 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:59.979789 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.869594477s)
	I0603 13:50:59.979817 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 13:50:59.979832 1142862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:59.979875 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:57.694027 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.193718 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.693488 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.193725 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.694310 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.194455 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.694182 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.193916 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.693504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:02.194236 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.816444 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:01.817757 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:00.872105 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:03.373427 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:04.067476 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.087571936s)
	I0603 13:51:04.067529 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 13:51:04.067549 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:04.067605 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:02.694248 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.194094 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.694072 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.194494 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.693899 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.193578 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.193934 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.693586 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:07.193993 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.316979 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:06.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.871061 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:08.371377 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.819264 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.75162069s)
	I0603 13:51:05.819302 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 13:51:05.819334 1142862 cache_images.go:123] Successfully loaded all cached images
	I0603 13:51:05.819341 1142862 cache_images.go:92] duration metric: took 15.849267186s to LoadCachedImages
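For orientation, the image-cache sequence above boils down to: ask the runtime (podman image inspect) whether it already holds each image at the expected hash, remove stale tags with crictl rmi, copy the cached tarball over if needed, and stream it in with podman load. The per-image loop is sketched below in simplified form; it omits the hash comparison and the scp step that the real code performs.

	package cachesketch

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// loadCachedImage loads one cached image tarball into the CRI-O/podman store
	// unless the runtime already reports an image ID for it.
	func loadCachedImage(image, tarball string) error {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return nil // already present; the real code also checks the hash matches
		}
		// Drop any stale reference before loading the cached copy.
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tarball, err)
		}
		return nil
	}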
	I0603 13:51:05.819352 1142862 kubeadm.go:928] updating node { 192.168.72.125 8443 v1.30.1 crio true true} ...
	I0603 13:51:05.819549 1142862 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-817450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:51:05.819636 1142862 ssh_runner.go:195] Run: crio config
	I0603 13:51:05.874089 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:05.874114 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:05.874127 1142862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:51:05.874152 1142862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.125 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-817450 NodeName:no-preload-817450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:51:05.874339 1142862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-817450"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:51:05.874411 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:51:05.886116 1142862 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:51:05.886185 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:51:05.896269 1142862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 13:51:05.914746 1142862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:51:05.931936 1142862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 13:51:05.949151 1142862 ssh_runner.go:195] Run: grep 192.168.72.125	control-plane.minikube.internal$ /etc/hosts
	I0603 13:51:05.953180 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:51:05.966675 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:51:06.107517 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:51:06.129233 1142862 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450 for IP: 192.168.72.125
	I0603 13:51:06.129264 1142862 certs.go:194] generating shared ca certs ...
	I0603 13:51:06.129280 1142862 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:51:06.129517 1142862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:51:06.129583 1142862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:51:06.129597 1142862 certs.go:256] generating profile certs ...
	I0603 13:51:06.129686 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/client.key
	I0603 13:51:06.129746 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key.e8ec030b
	I0603 13:51:06.129779 1142862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key
	I0603 13:51:06.129885 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:51:06.129912 1142862 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:51:06.129919 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:51:06.129939 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:51:06.129965 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:51:06.129991 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:51:06.130028 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:51:06.130817 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:51:06.171348 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:51:06.206270 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:51:06.240508 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:51:06.292262 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:51:06.320406 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:51:06.346655 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:51:06.375908 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:51:06.401723 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:51:06.425992 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:51:06.450484 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:51:06.475206 1142862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:51:06.492795 1142862 ssh_runner.go:195] Run: openssl version
	I0603 13:51:06.499759 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:51:06.511760 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516690 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516763 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.523284 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:51:06.535250 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:51:06.545921 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550765 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550823 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.556898 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:51:06.567717 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:51:06.578662 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584084 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584153 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.591566 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
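
The three rounds above hash each CA certificate with openssl and point /etc/ssl/certs/<hash>.0 at it, which is how OpenSSL-based clients locate trusted CAs. A minimal sketch of one such round, assuming root access and an openssl binary on the PATH (illustrative only, not the ssh_runner commands themselves):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a CA certificate and
// links /etc/ssl/certs/<hash>.0 at it, approximating the steps logged above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mimic `ln -fs`, which overwrites an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CA certificate installed")
}
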
	I0603 13:51:06.603554 1142862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:51:06.608323 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:51:06.614939 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:51:06.621519 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:51:06.627525 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:51:06.633291 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:51:06.639258 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
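
Each -checkend 86400 invocation above asks openssl whether the certificate expires within the next 24 hours. A minimal Go equivalent of that check, shown only for illustration (minikube runs the openssl binary on the guest rather than parsing the certificate itself):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; adjust as needed.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
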
	I0603 13:51:06.644789 1142862 kubeadm.go:391] StartCluster: {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:51:06.644876 1142862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:51:06.644928 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.694731 1142862 cri.go:89] found id: ""
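
The empty result above means crictl reported no kube-system containers yet. A small sketch of the same listing, assuming crictl is installed locally (minikube actually runs it over SSH inside the guest VM):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers shells out to crictl the same way the logged
// command does, returning the container IDs (one per line of --quiet output).
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
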
	I0603 13:51:06.694811 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:51:06.709773 1142862 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:51:06.709804 1142862 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:51:06.709812 1142862 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:51:06.709875 1142862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:51:06.721095 1142862 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:51:06.722256 1142862 kubeconfig.go:125] found "no-preload-817450" server: "https://192.168.72.125:8443"
	I0603 13:51:06.724877 1142862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:51:06.735753 1142862 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.125
	I0603 13:51:06.735789 1142862 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:51:06.735802 1142862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:51:06.735847 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.776650 1142862 cri.go:89] found id: ""
	I0603 13:51:06.776743 1142862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:51:06.796259 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:51:06.809765 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:51:06.809785 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:51:06.809839 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:51:06.819821 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:51:06.819878 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:51:06.829960 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:51:06.839510 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:51:06.839561 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:51:06.849346 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.858834 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:51:06.858886 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.869159 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:51:06.879672 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:51:06.879739 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
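
The sequence above checks whether each kubeconfig under /etc/kubernetes still references https://control-plane.minikube.internal:8443 and removes the ones that do not (here none of the files exist yet, so every grep fails and the rm -f calls are no-ops). A stand-alone approximation of that cleanup, not the actual kubeadm.go implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that is missing or does not
// reference the expected control-plane endpoint, like the grep-then-rm steps above.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Unreadable, absent, or pointing elsewhere: remove it (`rm -f` style).
			fmt.Printf("removing %s (stale or absent)\n", p)
			_ = os.Remove(p)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
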
	I0603 13:51:06.889393 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:51:06.899309 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:07.021375 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.119929 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.098510185s)
	I0603 13:51:08.119959 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.318752 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.396713 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.506285 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:51:08.506384 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.006865 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.506528 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.582432 1142862 api_server.go:72] duration metric: took 1.076134659s to wait for apiserver process to appear ...
	I0603 13:51:09.582463 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:51:09.582507 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:07.693540 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.194490 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.694498 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.194496 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.694286 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.193605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.694326 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.193904 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.694504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:12.194093 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.318739 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.817309 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.371622 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.372640 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:14.871007 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.049693 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.049731 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.049748 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.084495 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.084526 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.084541 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.141515 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.141555 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:12.582630 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.590279 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.082813 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.097350 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.097380 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.582895 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.587479 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.587511 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.083076 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.087531 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.087561 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.583203 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.587735 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.587781 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.082844 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.087403 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:15.087438 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.583226 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:51:15.601732 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:51:15.601762 1142862 api_server.go:131] duration metric: took 6.019291333s to wait for apiserver health ...
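
The healthz exchange above progresses from 403 (the anonymous probe is rejected before RBAC bootstraps), through 500 while post-start hooks finish, to 200 "ok". A hedged sketch of such a poll loop; unlike api_server.go it skips TLS verification purely to stay self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence of the checks above
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.125:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
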
	I0603 13:51:15.601775 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:15.601784 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:15.603654 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:51:12.694356 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.194219 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.693546 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.694003 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.694012 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.193567 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.694014 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:17.193554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.320666 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.818073 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.369593 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:19.369916 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.605291 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:51:15.618333 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:51:15.640539 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:51:15.651042 1142862 system_pods.go:59] 8 kube-system pods found
	I0603 13:51:15.651086 1142862 system_pods.go:61] "coredns-7db6d8ff4d-s562v" [be995d41-2b25-4839-a36b-212a507e7db7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:51:15.651102 1142862 system_pods.go:61] "etcd-no-preload-817450" [1b21708b-d81b-4594-a186-546437467c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:51:15.651117 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [0741a4bf-3161-4cf3-a9c6-36af2a0c4fde] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:51:15.651126 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [43713383-9197-4874-8aa9-7b1b1f05e4b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:51:15.651133 1142862 system_pods.go:61] "kube-proxy-2j4sg" [112657ad-311a-46ee-b5c0-6f544991465e] Running
	I0603 13:51:15.651145 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [40db5c40-dc01-4fd3-a5e0-06a6ee1fd0a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:51:15.651152 1142862 system_pods.go:61] "metrics-server-569cc877fc-mtvrq" [00cb7657-2564-4d25-8faa-b6f618e61115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:51:15.651163 1142862 system_pods.go:61] "storage-provisioner" [913d3120-32ce-4212-84be-9e3b99f2a894] Running
	I0603 13:51:15.651171 1142862 system_pods.go:74] duration metric: took 10.608401ms to wait for pod list to return data ...
	I0603 13:51:15.651181 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:51:15.654759 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:51:15.654784 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:51:15.654795 1142862 node_conditions.go:105] duration metric: took 3.608137ms to run NodePressure ...
	I0603 13:51:15.654813 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:15.940085 1142862 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944785 1142862 kubeadm.go:733] kubelet initialised
	I0603 13:51:15.944808 1142862 kubeadm.go:734] duration metric: took 4.692827ms waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944817 1142862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
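
The pod_ready.go lines that follow poll each system-critical pod until its PodReady condition reports True. A rough client-go sketch of that wait for a single pod; the kubeconfig path and pod name are taken from the surrounding log and are assumptions of this example, not minikube's own wiring:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check behind the pod_ready.go lines: a pod counts
// as Ready only when its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-s562v", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
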
	I0603 13:51:15.950113 1142862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:17.958330 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.456029 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.693856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.193853 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.693858 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.193568 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.693680 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.193556 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.694129 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.193662 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.694445 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:22.193668 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.317128 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.317375 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.317530 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.371070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:23.871400 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.958183 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:21.958208 1142862 pod_ready.go:81] duration metric: took 6.008058251s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:21.958220 1142862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:23.964785 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.694004 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.193793 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.694340 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.194411 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.694314 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.194501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.693545 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.194255 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.694312 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:27.194453 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.817165 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.317176 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:26.369665 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:28.370392 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:25.966060 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.965236 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.965267 1142862 pod_ready.go:81] duration metric: took 6.007038184s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.965281 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969898 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.969920 1142862 pod_ready.go:81] duration metric: took 4.630357ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969932 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974500 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.974517 1142862 pod_ready.go:81] duration metric: took 4.577117ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974526 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978510 1142862 pod_ready.go:92] pod "kube-proxy-2j4sg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.978530 1142862 pod_ready.go:81] duration metric: took 3.997645ms for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978537 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982488 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.982507 1142862 pod_ready.go:81] duration metric: took 3.962666ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982518 1142862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:29.989265 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.694334 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.193809 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.693744 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.193608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.194111 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.694213 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.694336 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:32.193716 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.324199 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:30.370435 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.870510 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.872543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.990649 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.488899 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.693501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.194174 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.693995 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.194242 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.693961 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.194052 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.693730 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.193559 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.693763 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:37.194274 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.816533 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.316832 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.371543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:39.372034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.489364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:38.490431 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:40.490888 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.693590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.194328 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.694296 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.194272 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.693607 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:40.193595 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:40.193691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:40.237747 1143678 cri.go:89] found id: ""
	I0603 13:51:40.237776 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.237785 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:40.237792 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:40.237854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:40.275924 1143678 cri.go:89] found id: ""
	I0603 13:51:40.275964 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.275975 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:40.275983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:40.276049 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:40.314827 1143678 cri.go:89] found id: ""
	I0603 13:51:40.314857 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.314870 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:40.314877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:40.314939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:40.359040 1143678 cri.go:89] found id: ""
	I0603 13:51:40.359072 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.359084 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:40.359092 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:40.359154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:40.396136 1143678 cri.go:89] found id: ""
	I0603 13:51:40.396170 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.396185 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:40.396194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:40.396261 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:40.436766 1143678 cri.go:89] found id: ""
	I0603 13:51:40.436803 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.436814 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:40.436828 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:40.436902 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:40.477580 1143678 cri.go:89] found id: ""
	I0603 13:51:40.477606 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.477615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:40.477621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:40.477713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:40.518920 1143678 cri.go:89] found id: ""
	I0603 13:51:40.518960 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.518972 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:40.518984 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:40.519001 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:40.659881 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:40.659913 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:40.659932 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:40.727850 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:40.727894 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:40.774153 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:40.774189 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:40.828054 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:40.828094 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:38.820985 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.322044 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.870717 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.872112 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:42.988898 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:44.989384 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.342659 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:43.357063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:43.357131 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:43.398000 1143678 cri.go:89] found id: ""
	I0603 13:51:43.398036 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.398045 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:43.398051 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:43.398106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:43.436761 1143678 cri.go:89] found id: ""
	I0603 13:51:43.436805 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.436814 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:43.436820 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:43.436872 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:43.478122 1143678 cri.go:89] found id: ""
	I0603 13:51:43.478154 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.478164 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:43.478172 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:43.478243 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:43.514473 1143678 cri.go:89] found id: ""
	I0603 13:51:43.514511 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.514523 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:43.514532 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:43.514600 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:43.552354 1143678 cri.go:89] found id: ""
	I0603 13:51:43.552390 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.552399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:43.552405 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:43.552489 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:43.590637 1143678 cri.go:89] found id: ""
	I0603 13:51:43.590665 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.590677 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:43.590685 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:43.590745 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:43.633958 1143678 cri.go:89] found id: ""
	I0603 13:51:43.634001 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.634013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:43.634021 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:43.634088 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:43.672640 1143678 cri.go:89] found id: ""
	I0603 13:51:43.672683 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.672695 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:43.672716 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:43.672733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:43.725880 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:43.725937 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:43.743736 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:43.743771 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:43.831757 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:43.831785 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:43.831801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:43.905062 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:43.905114 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:46.459588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:46.472911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:46.472983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:46.513723 1143678 cri.go:89] found id: ""
	I0603 13:51:46.513757 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.513768 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:46.513776 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:46.513845 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:46.549205 1143678 cri.go:89] found id: ""
	I0603 13:51:46.549234 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.549242 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:46.549251 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:46.549311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:46.585004 1143678 cri.go:89] found id: ""
	I0603 13:51:46.585042 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.585053 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:46.585063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:46.585120 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:46.620534 1143678 cri.go:89] found id: ""
	I0603 13:51:46.620571 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.620582 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:46.620590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:46.620661 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:46.655974 1143678 cri.go:89] found id: ""
	I0603 13:51:46.656005 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.656014 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:46.656020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:46.656091 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:46.693078 1143678 cri.go:89] found id: ""
	I0603 13:51:46.693141 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.693158 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:46.693168 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:46.693244 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:46.729177 1143678 cri.go:89] found id: ""
	I0603 13:51:46.729213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.729223 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:46.729232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:46.729300 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:46.766899 1143678 cri.go:89] found id: ""
	I0603 13:51:46.766929 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.766937 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:46.766946 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:46.766959 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:46.826715 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:46.826757 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:46.841461 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:46.841504 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:46.914505 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:46.914533 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:46.914551 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:46.989886 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:46.989928 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:43.817456 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:45.817576 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.370927 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:48.371196 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.990440 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.489483 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.532804 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:49.547359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:49.547438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:49.584262 1143678 cri.go:89] found id: ""
	I0603 13:51:49.584299 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.584311 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:49.584319 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:49.584389 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:49.622332 1143678 cri.go:89] found id: ""
	I0603 13:51:49.622372 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.622384 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:49.622393 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:49.622488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:49.664339 1143678 cri.go:89] found id: ""
	I0603 13:51:49.664378 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.664390 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:49.664399 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:49.664468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:49.712528 1143678 cri.go:89] found id: ""
	I0603 13:51:49.712558 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.712565 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:49.712574 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:49.712640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:49.767343 1143678 cri.go:89] found id: ""
	I0603 13:51:49.767374 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.767382 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:49.767388 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:49.767450 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:49.822457 1143678 cri.go:89] found id: ""
	I0603 13:51:49.822491 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.822499 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:49.822505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:49.822561 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:49.867823 1143678 cri.go:89] found id: ""
	I0603 13:51:49.867855 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.867867 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:49.867875 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:49.867936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:49.906765 1143678 cri.go:89] found id: ""
	I0603 13:51:49.906797 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.906805 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:49.906816 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:49.906829 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:49.921731 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:49.921764 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:49.993832 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:49.993860 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:49.993878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:50.070080 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:50.070125 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:50.112323 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:50.112357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:48.317830 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.816577 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.817035 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.871664 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.871865 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:51.990258 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:54.489037 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.666289 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:52.680475 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:52.680550 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:52.722025 1143678 cri.go:89] found id: ""
	I0603 13:51:52.722063 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.722075 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:52.722083 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:52.722145 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:52.759709 1143678 cri.go:89] found id: ""
	I0603 13:51:52.759742 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.759754 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:52.759762 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:52.759838 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:52.797131 1143678 cri.go:89] found id: ""
	I0603 13:51:52.797162 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.797171 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:52.797176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:52.797231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:52.832921 1143678 cri.go:89] found id: ""
	I0603 13:51:52.832951 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.832959 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:52.832965 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:52.833024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:52.869361 1143678 cri.go:89] found id: ""
	I0603 13:51:52.869389 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.869399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:52.869422 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:52.869495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:52.905863 1143678 cri.go:89] found id: ""
	I0603 13:51:52.905897 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.905909 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:52.905917 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:52.905985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:52.940407 1143678 cri.go:89] found id: ""
	I0603 13:51:52.940438 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.940446 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:52.940452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:52.940517 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:52.982079 1143678 cri.go:89] found id: ""
	I0603 13:51:52.982115 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.982126 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:52.982138 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:52.982155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:53.066897 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:53.066942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:53.108016 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:53.108056 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:53.164105 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:53.164151 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:53.178708 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:53.178743 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:53.257441 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.758633 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:55.774241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:55.774329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:55.809373 1143678 cri.go:89] found id: ""
	I0603 13:51:55.809436 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.809450 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:55.809467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:55.809539 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:55.849741 1143678 cri.go:89] found id: ""
	I0603 13:51:55.849768 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.849776 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:55.849783 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:55.849834 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:55.893184 1143678 cri.go:89] found id: ""
	I0603 13:51:55.893216 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.893228 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:55.893238 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:55.893307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:55.931572 1143678 cri.go:89] found id: ""
	I0603 13:51:55.931618 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.931632 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:55.931642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:55.931713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:55.969490 1143678 cri.go:89] found id: ""
	I0603 13:51:55.969527 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.969538 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:55.969546 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:55.969614 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:56.009266 1143678 cri.go:89] found id: ""
	I0603 13:51:56.009301 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.009313 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:56.009321 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:56.009394 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:56.049471 1143678 cri.go:89] found id: ""
	I0603 13:51:56.049520 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.049540 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:56.049547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:56.049616 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:56.090176 1143678 cri.go:89] found id: ""
	I0603 13:51:56.090213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.090228 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:56.090241 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:56.090266 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:56.175692 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:56.175737 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:56.222642 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:56.222683 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:56.276258 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:56.276301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:56.291703 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:56.291739 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:56.364788 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
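	(The cycle above repeats until the 4m0s wait expires: no kube-apiserver process is found, so minikube re-lists every control-plane container, regathers kubelet, CRI-O, dmesg, and describe-nodes output, and the describe-nodes call keeps failing because nothing answers on localhost:8443. A minimal sketch of running the same checks by hand on the node follows; the profile name is a placeholder, and only commands that appear verbatim in the log above are used.)

	# Open a shell on the affected node (profile name is a placeholder).
	minikube ssh -p <profile>

	# Is an apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# Are any control-plane containers present, running or exited?
	sudo crictl ps -a --name=kube-apiserver
	sudo crictl ps -a --name=etcd

	# Why did the kubelet / CRI-O fail to start them?
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400

	# Confirms the API server is unreachable (connection refused on localhost:8443).
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig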
	I0603 13:51:55.316604 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.816804 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:55.370917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.372903 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:59.870783 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:56.489636 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.990006 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.865558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:58.879983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:58.880074 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:58.917422 1143678 cri.go:89] found id: ""
	I0603 13:51:58.917461 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.917473 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:58.917480 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:58.917535 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:58.953900 1143678 cri.go:89] found id: ""
	I0603 13:51:58.953933 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.953943 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:58.953959 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:58.954030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:58.988677 1143678 cri.go:89] found id: ""
	I0603 13:51:58.988704 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.988713 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:58.988721 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:58.988783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:59.023436 1143678 cri.go:89] found id: ""
	I0603 13:51:59.023474 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.023486 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:59.023494 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:59.023570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:59.061357 1143678 cri.go:89] found id: ""
	I0603 13:51:59.061386 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.061394 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:59.061400 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:59.061487 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:59.102995 1143678 cri.go:89] found id: ""
	I0603 13:51:59.103025 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.103038 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:59.103047 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:59.103124 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:59.141443 1143678 cri.go:89] found id: ""
	I0603 13:51:59.141480 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.141492 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:59.141499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:59.141586 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:59.182909 1143678 cri.go:89] found id: ""
	I0603 13:51:59.182943 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.182953 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:59.182967 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:59.182984 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:59.259533 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:59.259580 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:59.308976 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:59.309016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.362092 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:59.362142 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:59.378836 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:59.378887 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:59.454524 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:01.954939 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:01.969968 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:01.970039 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:02.014226 1143678 cri.go:89] found id: ""
	I0603 13:52:02.014267 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.014280 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:02.014289 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:02.014361 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:02.051189 1143678 cri.go:89] found id: ""
	I0603 13:52:02.051244 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.051259 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:02.051268 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:02.051349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:02.093509 1143678 cri.go:89] found id: ""
	I0603 13:52:02.093548 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.093575 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:02.093586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:02.093718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:02.132069 1143678 cri.go:89] found id: ""
	I0603 13:52:02.132113 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.132129 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:02.132138 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:02.132299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:02.168043 1143678 cri.go:89] found id: ""
	I0603 13:52:02.168071 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.168079 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:02.168085 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:02.168138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:02.207029 1143678 cri.go:89] found id: ""
	I0603 13:52:02.207064 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.207074 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:02.207081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:02.207134 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:02.247669 1143678 cri.go:89] found id: ""
	I0603 13:52:02.247719 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.247728 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:02.247734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:02.247848 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:02.285780 1143678 cri.go:89] found id: ""
	I0603 13:52:02.285817 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.285829 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:02.285841 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:02.285863 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.817887 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.818381 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.871338 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:04.371052 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:00.990263 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.990651 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:05.490343 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.348775 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:02.349776 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:02.364654 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:02.364691 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:02.447948 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:02.447978 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:02.447992 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:02.534039 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:02.534100 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:05.080437 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:05.094169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:05.094245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:05.132312 1143678 cri.go:89] found id: ""
	I0603 13:52:05.132339 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.132346 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:05.132352 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:05.132423 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:05.168941 1143678 cri.go:89] found id: ""
	I0603 13:52:05.168979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.168990 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:05.168999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:05.169068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:05.207151 1143678 cri.go:89] found id: ""
	I0603 13:52:05.207188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.207196 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:05.207202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:05.207272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:05.258807 1143678 cri.go:89] found id: ""
	I0603 13:52:05.258839 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.258850 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:05.258859 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:05.259004 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:05.298250 1143678 cri.go:89] found id: ""
	I0603 13:52:05.298285 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.298297 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:05.298306 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:05.298381 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:05.340922 1143678 cri.go:89] found id: ""
	I0603 13:52:05.340951 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.340959 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:05.340966 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:05.341027 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:05.382680 1143678 cri.go:89] found id: ""
	I0603 13:52:05.382707 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.382715 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:05.382722 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:05.382777 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:05.426774 1143678 cri.go:89] found id: ""
	I0603 13:52:05.426801 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.426811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:05.426822 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:05.426836 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:05.483042 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:05.483091 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:05.499119 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:05.499159 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:05.580933 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:05.580962 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:05.580983 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:05.660395 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:05.660437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:03.818676 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.316881 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.371515 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.871174 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:07.490662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:09.992709 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.200887 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:08.215113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:08.215203 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:08.252367 1143678 cri.go:89] found id: ""
	I0603 13:52:08.252404 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.252417 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:08.252427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:08.252500 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:08.289249 1143678 cri.go:89] found id: ""
	I0603 13:52:08.289279 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.289290 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:08.289298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:08.289364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:08.331155 1143678 cri.go:89] found id: ""
	I0603 13:52:08.331181 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.331195 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:08.331201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:08.331258 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:08.371376 1143678 cri.go:89] found id: ""
	I0603 13:52:08.371400 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.371408 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:08.371415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:08.371477 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:08.408009 1143678 cri.go:89] found id: ""
	I0603 13:52:08.408045 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.408057 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:08.408065 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:08.408119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:08.446377 1143678 cri.go:89] found id: ""
	I0603 13:52:08.446413 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.446421 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:08.446429 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:08.446504 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:08.485429 1143678 cri.go:89] found id: ""
	I0603 13:52:08.485461 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.485471 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:08.485479 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:08.485546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:08.527319 1143678 cri.go:89] found id: ""
	I0603 13:52:08.527363 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.527375 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:08.527388 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:08.527414 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:08.602347 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:08.602371 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:08.602384 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:08.683855 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:08.683902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.724402 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:08.724443 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:08.781154 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:08.781202 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.297827 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:11.313927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:11.314006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:11.352622 1143678 cri.go:89] found id: ""
	I0603 13:52:11.352660 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.352671 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:11.352678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:11.352755 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:11.395301 1143678 cri.go:89] found id: ""
	I0603 13:52:11.395338 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.395351 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:11.395360 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:11.395442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:11.431104 1143678 cri.go:89] found id: ""
	I0603 13:52:11.431143 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.431155 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:11.431170 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:11.431234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:11.470177 1143678 cri.go:89] found id: ""
	I0603 13:52:11.470212 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.470223 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:11.470241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:11.470309 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:11.508741 1143678 cri.go:89] found id: ""
	I0603 13:52:11.508779 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.508803 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:11.508810 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:11.508906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:11.544970 1143678 cri.go:89] found id: ""
	I0603 13:52:11.545002 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.545012 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:11.545022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:11.545093 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:11.583606 1143678 cri.go:89] found id: ""
	I0603 13:52:11.583636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.583653 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:11.583666 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:11.583739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:11.624770 1143678 cri.go:89] found id: ""
	I0603 13:52:11.624806 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.624815 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:11.624824 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:11.624841 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:11.680251 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:11.680298 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.695656 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:11.695695 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:11.770414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:11.770478 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:11.770497 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:11.850812 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:11.850871 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.318447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:10.817734 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:11.372533 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:13.871822 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:12.490666 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.988752 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.398649 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:14.411591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:14.411689 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:14.447126 1143678 cri.go:89] found id: ""
	I0603 13:52:14.447158 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.447170 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:14.447178 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:14.447245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:14.486681 1143678 cri.go:89] found id: ""
	I0603 13:52:14.486716 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.486728 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:14.486735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:14.486799 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:14.521297 1143678 cri.go:89] found id: ""
	I0603 13:52:14.521326 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.521337 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:14.521343 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:14.521443 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:14.565086 1143678 cri.go:89] found id: ""
	I0603 13:52:14.565121 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.565130 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:14.565136 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:14.565196 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:14.601947 1143678 cri.go:89] found id: ""
	I0603 13:52:14.601975 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.601984 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:14.601990 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:14.602044 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:14.638332 1143678 cri.go:89] found id: ""
	I0603 13:52:14.638359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.638366 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:14.638374 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:14.638435 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:14.675254 1143678 cri.go:89] found id: ""
	I0603 13:52:14.675284 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.675293 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:14.675299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:14.675354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:14.712601 1143678 cri.go:89] found id: ""
	I0603 13:52:14.712631 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.712639 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:14.712649 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:14.712663 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:14.787026 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:14.787068 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:14.836534 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:14.836564 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:14.889682 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:14.889729 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:14.905230 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:14.905264 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:14.979090 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:13.317070 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.317490 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.816412 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.871901 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.370626 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:16.989195 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.990108 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.479590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:17.495088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:17.495250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:17.530832 1143678 cri.go:89] found id: ""
	I0603 13:52:17.530871 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.530883 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:17.530891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:17.530966 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:17.567183 1143678 cri.go:89] found id: ""
	I0603 13:52:17.567213 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.567224 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:17.567232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:17.567305 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:17.602424 1143678 cri.go:89] found id: ""
	I0603 13:52:17.602458 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.602469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:17.602493 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:17.602570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:17.641148 1143678 cri.go:89] found id: ""
	I0603 13:52:17.641184 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.641197 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:17.641205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:17.641273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:17.679004 1143678 cri.go:89] found id: ""
	I0603 13:52:17.679031 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.679039 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:17.679045 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:17.679102 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:17.717667 1143678 cri.go:89] found id: ""
	I0603 13:52:17.717698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.717707 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:17.717715 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:17.717786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:17.760262 1143678 cri.go:89] found id: ""
	I0603 13:52:17.760300 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.760323 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:17.760331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:17.760416 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:17.796910 1143678 cri.go:89] found id: ""
	I0603 13:52:17.796943 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.796960 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:17.796976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:17.796990 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:17.811733 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:17.811768 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:17.891891 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:17.891920 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:17.891939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:17.969495 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:17.969535 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:18.032622 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:18.032654 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.586079 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:20.599118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:20.599202 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:20.633732 1143678 cri.go:89] found id: ""
	I0603 13:52:20.633770 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.633780 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:20.633787 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:20.633841 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:20.668126 1143678 cri.go:89] found id: ""
	I0603 13:52:20.668155 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.668163 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:20.668169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:20.668231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:20.704144 1143678 cri.go:89] found id: ""
	I0603 13:52:20.704177 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.704187 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:20.704194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:20.704251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:20.745562 1143678 cri.go:89] found id: ""
	I0603 13:52:20.745594 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.745602 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:20.745608 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:20.745663 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:20.788998 1143678 cri.go:89] found id: ""
	I0603 13:52:20.789041 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.789053 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:20.789075 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:20.789152 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:20.832466 1143678 cri.go:89] found id: ""
	I0603 13:52:20.832495 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.832503 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:20.832510 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:20.832575 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:20.875212 1143678 cri.go:89] found id: ""
	I0603 13:52:20.875248 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.875258 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:20.875267 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:20.875336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:20.912957 1143678 cri.go:89] found id: ""
	I0603 13:52:20.912989 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.912999 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:20.913011 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:20.913030 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.963655 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:20.963700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:20.978619 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:20.978658 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:21.057136 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:21.057163 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:21.057185 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:21.136368 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:21.136415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:19.817227 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.817625 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:20.871465 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.370757 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.488564 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.991662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.676222 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:23.691111 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:23.691213 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:23.733282 1143678 cri.go:89] found id: ""
	I0603 13:52:23.733319 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.733332 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:23.733341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:23.733438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:23.780841 1143678 cri.go:89] found id: ""
	I0603 13:52:23.780873 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.780882 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:23.780894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:23.780947 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:23.820521 1143678 cri.go:89] found id: ""
	I0603 13:52:23.820553 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.820565 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:23.820573 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:23.820636 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:23.857684 1143678 cri.go:89] found id: ""
	I0603 13:52:23.857728 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.857739 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:23.857747 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:23.857818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:23.896800 1143678 cri.go:89] found id: ""
	I0603 13:52:23.896829 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.896842 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:23.896850 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:23.896914 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:23.935511 1143678 cri.go:89] found id: ""
	I0603 13:52:23.935538 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.935547 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:23.935554 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:23.935608 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:23.973858 1143678 cri.go:89] found id: ""
	I0603 13:52:23.973885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.973895 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:23.973901 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:23.973961 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:24.012491 1143678 cri.go:89] found id: ""
	I0603 13:52:24.012521 1143678 logs.go:276] 0 containers: []
	W0603 13:52:24.012532 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:24.012545 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:24.012569 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.064274 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:24.064319 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:24.079382 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:24.079420 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:24.153708 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:24.153733 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:24.153749 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:24.233104 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:24.233148 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:26.774771 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:26.789853 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:26.789924 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:26.830089 1143678 cri.go:89] found id: ""
	I0603 13:52:26.830129 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.830167 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:26.830176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:26.830251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:26.866907 1143678 cri.go:89] found id: ""
	I0603 13:52:26.866941 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.866952 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:26.866960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:26.867031 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:26.915028 1143678 cri.go:89] found id: ""
	I0603 13:52:26.915061 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.915070 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:26.915079 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:26.915151 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:26.962044 1143678 cri.go:89] found id: ""
	I0603 13:52:26.962075 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.962083 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:26.962088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:26.962154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:26.996156 1143678 cri.go:89] found id: ""
	I0603 13:52:26.996188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.996196 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:26.996202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:26.996265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:27.038593 1143678 cri.go:89] found id: ""
	I0603 13:52:27.038627 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.038636 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:27.038642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:27.038708 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:27.076116 1143678 cri.go:89] found id: ""
	I0603 13:52:27.076144 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.076153 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:27.076159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:27.076228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:27.110653 1143678 cri.go:89] found id: ""
	I0603 13:52:27.110688 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.110700 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:27.110714 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:27.110733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:27.193718 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:27.193743 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:27.193756 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:27.269423 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:27.269483 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:27.307899 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:27.307939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.317663 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.817148 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:25.371861 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.870070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:29.870299 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.488753 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:28.489065 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:30.489568 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.363830 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:27.363878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:29.879016 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:29.893482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:29.893553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:29.932146 1143678 cri.go:89] found id: ""
	I0603 13:52:29.932190 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.932199 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:29.932205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:29.932259 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:29.968986 1143678 cri.go:89] found id: ""
	I0603 13:52:29.969020 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.969032 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:29.969040 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:29.969097 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:30.007190 1143678 cri.go:89] found id: ""
	I0603 13:52:30.007228 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.007238 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:30.007244 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:30.007303 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:30.044607 1143678 cri.go:89] found id: ""
	I0603 13:52:30.044638 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.044646 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:30.044652 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:30.044706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:30.083103 1143678 cri.go:89] found id: ""
	I0603 13:52:30.083179 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.083193 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:30.083204 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:30.083280 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:30.124125 1143678 cri.go:89] found id: ""
	I0603 13:52:30.124152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.124160 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:30.124167 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:30.124234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:30.164293 1143678 cri.go:89] found id: ""
	I0603 13:52:30.164329 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.164345 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:30.164353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:30.164467 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:30.219980 1143678 cri.go:89] found id: ""
	I0603 13:52:30.220015 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.220028 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:30.220042 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:30.220063 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:30.313282 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:30.313305 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:30.313323 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:30.393759 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:30.393801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:30.441384 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:30.441434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:30.493523 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:30.493558 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:28.817554 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.317629 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.870659 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.870954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:32.990340 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.495665 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.009114 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:33.023177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:33.023278 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:33.065346 1143678 cri.go:89] found id: ""
	I0603 13:52:33.065388 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.065400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:33.065424 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:33.065506 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:33.108513 1143678 cri.go:89] found id: ""
	I0603 13:52:33.108549 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.108561 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:33.108569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:33.108640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:33.146053 1143678 cri.go:89] found id: ""
	I0603 13:52:33.146082 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.146089 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:33.146107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:33.146165 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:33.187152 1143678 cri.go:89] found id: ""
	I0603 13:52:33.187195 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.187206 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:33.187216 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:33.187302 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:33.223887 1143678 cri.go:89] found id: ""
	I0603 13:52:33.223920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.223932 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:33.223941 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:33.224010 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:33.263902 1143678 cri.go:89] found id: ""
	I0603 13:52:33.263958 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.263971 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:33.263980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:33.264048 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:33.302753 1143678 cri.go:89] found id: ""
	I0603 13:52:33.302785 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.302796 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:33.302805 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:33.302859 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:33.340711 1143678 cri.go:89] found id: ""
	I0603 13:52:33.340745 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.340754 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:33.340763 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:33.340780 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:33.400226 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:33.400271 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:33.414891 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:33.414923 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:33.498121 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:33.498156 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:33.498172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.575682 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:33.575731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.116930 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:36.133001 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:36.133070 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:36.182727 1143678 cri.go:89] found id: ""
	I0603 13:52:36.182763 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.182774 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:36.182782 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:36.182851 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:36.228804 1143678 cri.go:89] found id: ""
	I0603 13:52:36.228841 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.228854 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:36.228862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:36.228929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:36.279320 1143678 cri.go:89] found id: ""
	I0603 13:52:36.279359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.279370 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:36.279378 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:36.279461 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:36.319725 1143678 cri.go:89] found id: ""
	I0603 13:52:36.319751 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.319759 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:36.319765 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:36.319819 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:36.356657 1143678 cri.go:89] found id: ""
	I0603 13:52:36.356685 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.356693 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:36.356703 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:36.356760 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:36.393397 1143678 cri.go:89] found id: ""
	I0603 13:52:36.393448 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.393459 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:36.393467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:36.393545 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:36.429211 1143678 cri.go:89] found id: ""
	I0603 13:52:36.429246 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.429254 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:36.429260 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:36.429324 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:36.466796 1143678 cri.go:89] found id: ""
	I0603 13:52:36.466831 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.466839 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:36.466849 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:36.466862 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.509871 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:36.509900 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:36.562167 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:36.562206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:36.577014 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:36.577047 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:36.657581 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:36.657604 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:36.657625 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.817495 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.820854 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:36.371645 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:38.871484 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:37.989038 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.989986 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.242339 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:39.257985 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:39.258072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:39.300153 1143678 cri.go:89] found id: ""
	I0603 13:52:39.300185 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.300197 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:39.300205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:39.300304 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:39.336117 1143678 cri.go:89] found id: ""
	I0603 13:52:39.336152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.336162 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:39.336175 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:39.336307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:39.375945 1143678 cri.go:89] found id: ""
	I0603 13:52:39.375979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.375990 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:39.375998 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:39.376066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:39.417207 1143678 cri.go:89] found id: ""
	I0603 13:52:39.417242 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.417253 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:39.417261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:39.417340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:39.456259 1143678 cri.go:89] found id: ""
	I0603 13:52:39.456295 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.456307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:39.456315 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:39.456377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:39.494879 1143678 cri.go:89] found id: ""
	I0603 13:52:39.494904 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.494913 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:39.494919 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:39.494979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:39.532129 1143678 cri.go:89] found id: ""
	I0603 13:52:39.532157 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.532168 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:39.532177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:39.532267 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:39.570662 1143678 cri.go:89] found id: ""
	I0603 13:52:39.570693 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.570703 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:39.570717 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:39.570734 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:39.622008 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:39.622057 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:39.636849 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:39.636884 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:39.719914 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:39.719948 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:39.719967 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:39.801723 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:39.801769 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
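	The cycle above can be reproduced by hand on the affected node; a minimal sketch of the same checks, assuming SSH access to the minikube VM and that crictl talks to CRI-O as in this job:
	
	    # Query the runtime for control-plane containers in any state; an empty
	    # result matches the `found id: ""` / "0 containers" entries above.
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo crictl ps -a --quiet --name=etcd
	
	    # Fallback log sources the loop collects when no containers are found.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400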
	I0603 13:52:38.317321 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:40.817649 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.819652 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:41.370965 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:43.371900 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.490311 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:44.988731 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.348936 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:42.363663 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:42.363735 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:42.400584 1143678 cri.go:89] found id: ""
	I0603 13:52:42.400616 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.400625 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:42.400631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:42.400685 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:42.438853 1143678 cri.go:89] found id: ""
	I0603 13:52:42.438885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.438893 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:42.438899 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:42.438954 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:42.474980 1143678 cri.go:89] found id: ""
	I0603 13:52:42.475013 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.475025 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:42.475032 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:42.475086 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:42.511027 1143678 cri.go:89] found id: ""
	I0603 13:52:42.511056 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.511068 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:42.511077 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:42.511237 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:42.545333 1143678 cri.go:89] found id: ""
	I0603 13:52:42.545367 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.545378 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:42.545386 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:42.545468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:42.583392 1143678 cri.go:89] found id: ""
	I0603 13:52:42.583438 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.583556 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:42.583591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:42.583656 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:42.620886 1143678 cri.go:89] found id: ""
	I0603 13:52:42.620916 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.620924 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:42.620930 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:42.620985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:42.656265 1143678 cri.go:89] found id: ""
	I0603 13:52:42.656301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.656313 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:42.656327 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:42.656344 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:42.711078 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:42.711124 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:42.727751 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:42.727788 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:42.802330 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:42.802356 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:42.802370 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:42.883700 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:42.883742 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.424591 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:45.440797 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:45.440883 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:45.483664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.483698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.483709 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:45.483717 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:45.483789 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:45.523147 1143678 cri.go:89] found id: ""
	I0603 13:52:45.523182 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.523193 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:45.523201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:45.523273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:45.563483 1143678 cri.go:89] found id: ""
	I0603 13:52:45.563516 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.563527 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:45.563536 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:45.563598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:45.603574 1143678 cri.go:89] found id: ""
	I0603 13:52:45.603603 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.603618 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:45.603625 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:45.603680 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:45.642664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.642694 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.642705 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:45.642714 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:45.642793 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:45.679961 1143678 cri.go:89] found id: ""
	I0603 13:52:45.679998 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.680011 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:45.680026 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:45.680100 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:45.716218 1143678 cri.go:89] found id: ""
	I0603 13:52:45.716255 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.716263 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:45.716270 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:45.716364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:45.752346 1143678 cri.go:89] found id: ""
	I0603 13:52:45.752374 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.752382 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:45.752391 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:45.752405 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.793992 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:45.794029 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:45.844930 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:45.844973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:45.859594 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:45.859633 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:45.936469 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:45.936498 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:45.936515 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:45.317705 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.818994 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:45.870780 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.871003 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.871625 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:46.990866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.488680 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:48.514959 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:48.528331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:48.528401 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:48.565671 1143678 cri.go:89] found id: ""
	I0603 13:52:48.565703 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.565715 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:48.565724 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:48.565786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:48.603938 1143678 cri.go:89] found id: ""
	I0603 13:52:48.603973 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.603991 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:48.604000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:48.604068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:48.643521 1143678 cri.go:89] found id: ""
	I0603 13:52:48.643550 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.643562 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:48.643571 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:48.643627 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:48.678264 1143678 cri.go:89] found id: ""
	I0603 13:52:48.678301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.678312 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:48.678320 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:48.678407 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:48.714974 1143678 cri.go:89] found id: ""
	I0603 13:52:48.715014 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.715026 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:48.715034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:48.715138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:48.750364 1143678 cri.go:89] found id: ""
	I0603 13:52:48.750396 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.750408 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:48.750416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:48.750482 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:48.788203 1143678 cri.go:89] found id: ""
	I0603 13:52:48.788238 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.788249 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:48.788258 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:48.788345 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:48.826891 1143678 cri.go:89] found id: ""
	I0603 13:52:48.826920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.826928 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:48.826938 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:48.826951 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:48.877271 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:48.877315 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:48.892155 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:48.892187 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:48.973433 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:48.973459 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:48.973473 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:49.062819 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:49.062888 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:51.614261 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:51.628056 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:51.628142 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:51.662894 1143678 cri.go:89] found id: ""
	I0603 13:52:51.662924 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.662935 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:51.662942 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:51.663009 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:51.701847 1143678 cri.go:89] found id: ""
	I0603 13:52:51.701878 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.701889 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:51.701896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:51.701963 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:51.737702 1143678 cri.go:89] found id: ""
	I0603 13:52:51.737741 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.737752 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:51.737760 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:51.737833 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:51.772913 1143678 cri.go:89] found id: ""
	I0603 13:52:51.772944 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.772956 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:51.772964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:51.773034 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:51.810268 1143678 cri.go:89] found id: ""
	I0603 13:52:51.810298 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.810307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:51.810312 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:51.810377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:51.848575 1143678 cri.go:89] found id: ""
	I0603 13:52:51.848612 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.848624 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:51.848633 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:51.848696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:51.886500 1143678 cri.go:89] found id: ""
	I0603 13:52:51.886536 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.886549 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:51.886560 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:51.886617 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:51.924070 1143678 cri.go:89] found id: ""
	I0603 13:52:51.924104 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.924115 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:51.924128 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:51.924146 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:51.940324 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:51.940355 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:52.019958 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:52.019997 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:52.020015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:52.095953 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:52.095999 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:52.141070 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:52.141102 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:50.317008 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:52.317142 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.872275 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.376761 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.490098 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:53.491292 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.694651 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:54.708508 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:54.708597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:54.745708 1143678 cri.go:89] found id: ""
	I0603 13:52:54.745748 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.745762 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:54.745770 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:54.745842 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:54.783335 1143678 cri.go:89] found id: ""
	I0603 13:52:54.783369 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.783381 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:54.783389 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:54.783465 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:54.824111 1143678 cri.go:89] found id: ""
	I0603 13:52:54.824140 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.824151 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:54.824159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:54.824230 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:54.868676 1143678 cri.go:89] found id: ""
	I0603 13:52:54.868710 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.868721 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:54.868730 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:54.868801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:54.906180 1143678 cri.go:89] found id: ""
	I0603 13:52:54.906216 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.906227 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:54.906235 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:54.906310 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:54.945499 1143678 cri.go:89] found id: ""
	I0603 13:52:54.945532 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.945544 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:54.945552 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:54.945619 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:54.986785 1143678 cri.go:89] found id: ""
	I0603 13:52:54.986812 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.986820 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:54.986826 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:54.986888 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:55.035290 1143678 cri.go:89] found id: ""
	I0603 13:52:55.035320 1143678 logs.go:276] 0 containers: []
	W0603 13:52:55.035329 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:55.035338 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:55.035352 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:55.085384 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:55.085451 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:55.100699 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:55.100733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:55.171587 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:55.171614 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:55.171638 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:55.249078 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:55.249123 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:54.317435 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.318657 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.869954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.872728 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:55.990512 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.489578 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:00.490668 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
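	The pod_ready polls interleaved here come from the other test processes; an equivalent manual check (a sketch only, assuming kubectl is pointed at the same cluster's kubeconfig, not minikube's actual code path) would be:
	
	    # Prints "True" once the metrics-server pod reports Ready; these logs show "False".
	    kubectl -n kube-system get pod metrics-server-569cc877fc-mtvrq \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'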
	I0603 13:52:57.791538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:57.804373 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:57.804437 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:57.843969 1143678 cri.go:89] found id: ""
	I0603 13:52:57.844007 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.844016 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:57.844022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:57.844077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:57.881201 1143678 cri.go:89] found id: ""
	I0603 13:52:57.881239 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.881252 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:57.881261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:57.881336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:57.917572 1143678 cri.go:89] found id: ""
	I0603 13:52:57.917601 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.917610 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:57.917617 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:57.917671 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:57.951603 1143678 cri.go:89] found id: ""
	I0603 13:52:57.951642 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.951654 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:57.951661 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:57.951716 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:57.992833 1143678 cri.go:89] found id: ""
	I0603 13:52:57.992863 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.992874 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:57.992881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:57.992945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:58.031595 1143678 cri.go:89] found id: ""
	I0603 13:52:58.031636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.031648 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:58.031657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:58.031723 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:58.068947 1143678 cri.go:89] found id: ""
	I0603 13:52:58.068985 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.068996 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:58.069005 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:58.069077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:58.106559 1143678 cri.go:89] found id: ""
	I0603 13:52:58.106587 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.106598 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:58.106623 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:58.106640 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:58.162576 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:58.162623 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:58.177104 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:58.177155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:58.250279 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:58.250312 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:58.250329 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.330876 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:58.330920 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:00.871443 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:00.885505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:00.885589 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:00.923878 1143678 cri.go:89] found id: ""
	I0603 13:53:00.923910 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.923920 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:00.923928 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:00.923995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:00.960319 1143678 cri.go:89] found id: ""
	I0603 13:53:00.960362 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.960375 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:00.960384 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:00.960449 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:00.998806 1143678 cri.go:89] found id: ""
	I0603 13:53:00.998845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.998857 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:00.998866 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:00.998929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:01.033211 1143678 cri.go:89] found id: ""
	I0603 13:53:01.033245 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.033256 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:01.033265 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:01.033341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:01.072852 1143678 cri.go:89] found id: ""
	I0603 13:53:01.072883 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.072891 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:01.072898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:01.072950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:01.115667 1143678 cri.go:89] found id: ""
	I0603 13:53:01.115699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.115711 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:01.115719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:01.115824 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:01.153676 1143678 cri.go:89] found id: ""
	I0603 13:53:01.153717 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.153733 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:01.153741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:01.153815 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:01.188970 1143678 cri.go:89] found id: ""
	I0603 13:53:01.189003 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.189017 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:01.189031 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:01.189049 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:01.233151 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:01.233214 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:01.287218 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:01.287269 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:01.302370 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:01.302408 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:01.378414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:01.378444 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:01.378463 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.817003 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.317698 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.371257 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.872917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:02.989133 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:04.990930 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.957327 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:03.971246 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:03.971340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:04.007299 1143678 cri.go:89] found id: ""
	I0603 13:53:04.007335 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.007347 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:04.007356 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:04.007427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:04.046364 1143678 cri.go:89] found id: ""
	I0603 13:53:04.046396 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.046405 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:04.046411 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:04.046469 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:04.082094 1143678 cri.go:89] found id: ""
	I0603 13:53:04.082127 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.082139 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:04.082148 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:04.082209 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:04.117389 1143678 cri.go:89] found id: ""
	I0603 13:53:04.117434 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.117446 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:04.117454 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:04.117530 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:04.150560 1143678 cri.go:89] found id: ""
	I0603 13:53:04.150596 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.150606 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:04.150614 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:04.150678 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:04.184808 1143678 cri.go:89] found id: ""
	I0603 13:53:04.184845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.184857 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:04.184865 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:04.184935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:04.220286 1143678 cri.go:89] found id: ""
	I0603 13:53:04.220317 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.220326 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:04.220332 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:04.220385 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:04.258898 1143678 cri.go:89] found id: ""
	I0603 13:53:04.258929 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.258941 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:04.258955 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:04.258972 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:04.312151 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:04.312198 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:04.329908 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:04.329943 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:04.402075 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:04.402106 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:04.402138 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:04.482873 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:04.482936 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:07.049978 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:07.063072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:07.063140 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:07.097703 1143678 cri.go:89] found id: ""
	I0603 13:53:07.097737 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.097748 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:07.097755 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:07.097811 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:07.134826 1143678 cri.go:89] found id: ""
	I0603 13:53:07.134865 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.134878 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:07.134886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:07.134955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:07.178015 1143678 cri.go:89] found id: ""
	I0603 13:53:07.178050 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.178061 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:07.178068 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:07.178138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:07.215713 1143678 cri.go:89] found id: ""
	I0603 13:53:07.215753 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.215764 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:07.215777 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:07.215840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:07.251787 1143678 cri.go:89] found id: ""
	I0603 13:53:07.251815 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.251824 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:07.251830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:07.251897 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:07.293357 1143678 cri.go:89] found id: ""
	I0603 13:53:07.293387 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.293398 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:07.293427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:07.293496 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:07.329518 1143678 cri.go:89] found id: ""
	I0603 13:53:07.329551 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.329561 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:07.329569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:07.329650 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:03.819203 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.317653 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.370539 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:08.370701 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.490706 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:09.990002 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.369534 1143678 cri.go:89] found id: ""
	I0603 13:53:07.369576 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.369587 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:07.369601 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:07.369617 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:07.424211 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:07.424260 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:07.439135 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:07.439172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:07.511325 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:07.511360 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:07.511378 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:07.588348 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:07.588393 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:10.129812 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:10.143977 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:10.144057 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:10.181873 1143678 cri.go:89] found id: ""
	I0603 13:53:10.181906 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.181918 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:10.181926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:10.181981 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:10.218416 1143678 cri.go:89] found id: ""
	I0603 13:53:10.218460 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.218473 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:10.218482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:10.218562 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:10.253580 1143678 cri.go:89] found id: ""
	I0603 13:53:10.253618 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.253630 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:10.253646 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:10.253717 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:10.302919 1143678 cri.go:89] found id: ""
	I0603 13:53:10.302949 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.302957 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:10.302964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:10.303024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:10.343680 1143678 cri.go:89] found id: ""
	I0603 13:53:10.343709 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.343721 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:10.343729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:10.343798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:10.379281 1143678 cri.go:89] found id: ""
	I0603 13:53:10.379307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.379315 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:10.379322 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:10.379374 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:10.420197 1143678 cri.go:89] found id: ""
	I0603 13:53:10.420225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.420233 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:10.420239 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:10.420322 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:10.458578 1143678 cri.go:89] found id: ""
	I0603 13:53:10.458609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.458618 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:10.458629 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:10.458642 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:10.511785 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:10.511828 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:10.526040 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:10.526081 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:10.603721 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:10.603749 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:10.603766 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:10.684153 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:10.684204 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:08.816447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.318264 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:10.374788 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:12.871019 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.871064 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.992127 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.488866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:13.227605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:13.241131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:13.241228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:13.284636 1143678 cri.go:89] found id: ""
	I0603 13:53:13.284667 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.284675 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:13.284681 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:13.284737 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:13.322828 1143678 cri.go:89] found id: ""
	I0603 13:53:13.322861 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.322873 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:13.322881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:13.322945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:13.360061 1143678 cri.go:89] found id: ""
	I0603 13:53:13.360089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.360097 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:13.360103 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:13.360176 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:13.397115 1143678 cri.go:89] found id: ""
	I0603 13:53:13.397149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.397158 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:13.397164 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:13.397234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:13.434086 1143678 cri.go:89] found id: ""
	I0603 13:53:13.434118 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.434127 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:13.434135 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:13.434194 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:13.470060 1143678 cri.go:89] found id: ""
	I0603 13:53:13.470089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.470101 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:13.470113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:13.470189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:13.508423 1143678 cri.go:89] found id: ""
	I0603 13:53:13.508464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.508480 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:13.508487 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:13.508552 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:13.546713 1143678 cri.go:89] found id: ""
	I0603 13:53:13.546752 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.546765 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:13.546778 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:13.546796 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:13.632984 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:13.633027 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:13.679169 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:13.679216 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:13.735765 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:13.735812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.750175 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:13.750210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:13.826571 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.327185 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:16.340163 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:16.340253 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:16.380260 1143678 cri.go:89] found id: ""
	I0603 13:53:16.380292 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.380300 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:16.380307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:16.380373 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:16.420408 1143678 cri.go:89] found id: ""
	I0603 13:53:16.420438 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.420449 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:16.420457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:16.420534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:16.459250 1143678 cri.go:89] found id: ""
	I0603 13:53:16.459285 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.459297 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:16.459307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:16.459377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:16.496395 1143678 cri.go:89] found id: ""
	I0603 13:53:16.496427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.496436 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:16.496444 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:16.496516 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:16.534402 1143678 cri.go:89] found id: ""
	I0603 13:53:16.534433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.534442 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:16.534449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:16.534514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:16.571550 1143678 cri.go:89] found id: ""
	I0603 13:53:16.571577 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.571584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:16.571591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:16.571659 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:16.608425 1143678 cri.go:89] found id: ""
	I0603 13:53:16.608457 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.608468 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:16.608482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:16.608549 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:16.647282 1143678 cri.go:89] found id: ""
	I0603 13:53:16.647315 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.647324 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:16.647334 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:16.647351 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:16.728778 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.728814 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:16.728831 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:16.822702 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:16.822747 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:16.868816 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:16.868845 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:16.922262 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:16.922301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.818935 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.316865 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:17.370681 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.371232 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.489494 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:18.490176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:20.491433 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.438231 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:19.452520 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:19.452603 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:19.488089 1143678 cri.go:89] found id: ""
	I0603 13:53:19.488121 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.488133 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:19.488141 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:19.488216 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:19.524494 1143678 cri.go:89] found id: ""
	I0603 13:53:19.524527 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.524537 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:19.524543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:19.524595 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:19.561288 1143678 cri.go:89] found id: ""
	I0603 13:53:19.561323 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.561333 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:19.561341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:19.561420 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:19.597919 1143678 cri.go:89] found id: ""
	I0603 13:53:19.597965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.597976 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:19.597984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:19.598056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:19.634544 1143678 cri.go:89] found id: ""
	I0603 13:53:19.634579 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.634591 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:19.634599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:19.634668 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:19.671473 1143678 cri.go:89] found id: ""
	I0603 13:53:19.671506 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.671518 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:19.671527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:19.671598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:19.707968 1143678 cri.go:89] found id: ""
	I0603 13:53:19.708000 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.708011 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:19.708019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:19.708119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:19.745555 1143678 cri.go:89] found id: ""
	I0603 13:53:19.745593 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.745604 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:19.745617 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:19.745631 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:19.830765 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:19.830812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:19.875160 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:19.875197 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:19.927582 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:19.927627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:19.942258 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:19.942289 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:20.016081 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:18.820067 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.319103 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.871214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.371680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.990210 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.990605 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.516859 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:22.534973 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:22.535040 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:22.593003 1143678 cri.go:89] found id: ""
	I0603 13:53:22.593043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.593051 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:22.593058 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:22.593121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:22.649916 1143678 cri.go:89] found id: ""
	I0603 13:53:22.649951 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.649963 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:22.649971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:22.650030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:22.689397 1143678 cri.go:89] found id: ""
	I0603 13:53:22.689449 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.689459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:22.689465 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:22.689521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:22.725109 1143678 cri.go:89] found id: ""
	I0603 13:53:22.725149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.725161 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:22.725169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:22.725250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:22.761196 1143678 cri.go:89] found id: ""
	I0603 13:53:22.761225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.761237 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:22.761245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:22.761311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:22.804065 1143678 cri.go:89] found id: ""
	I0603 13:53:22.804103 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.804112 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:22.804119 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:22.804189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:22.840456 1143678 cri.go:89] found id: ""
	I0603 13:53:22.840485 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.840493 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:22.840499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:22.840553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:22.876796 1143678 cri.go:89] found id: ""
	I0603 13:53:22.876831 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.876842 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:22.876854 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:22.876869 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:22.957274 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:22.957317 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:22.998360 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:22.998394 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.054895 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:23.054942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:23.070107 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:23.070141 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:23.147460 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:25.647727 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:25.663603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:25.663691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:25.698102 1143678 cri.go:89] found id: ""
	I0603 13:53:25.698139 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.698150 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:25.698159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:25.698227 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:25.738601 1143678 cri.go:89] found id: ""
	I0603 13:53:25.738641 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.738648 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:25.738655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:25.738718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:25.780622 1143678 cri.go:89] found id: ""
	I0603 13:53:25.780657 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.780670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:25.780678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:25.780751 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:25.816950 1143678 cri.go:89] found id: ""
	I0603 13:53:25.816978 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.816989 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:25.816997 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:25.817060 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:25.860011 1143678 cri.go:89] found id: ""
	I0603 13:53:25.860051 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.860063 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:25.860072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:25.860138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:25.898832 1143678 cri.go:89] found id: ""
	I0603 13:53:25.898866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.898878 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:25.898886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:25.898959 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:25.937483 1143678 cri.go:89] found id: ""
	I0603 13:53:25.937518 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.937533 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:25.937541 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:25.937607 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:25.973972 1143678 cri.go:89] found id: ""
	I0603 13:53:25.974008 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.974021 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:25.974034 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:25.974065 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:25.989188 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:25.989227 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:26.065521 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:26.065546 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:26.065560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:26.147852 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:26.147899 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:26.191395 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:26.191431 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.816928 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:25.818534 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:26.872084 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.872558 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:27.489951 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:29.989352 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.751041 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:28.764764 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:28.764826 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:28.808232 1143678 cri.go:89] found id: ""
	I0603 13:53:28.808271 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.808285 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:28.808293 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:28.808369 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:28.849058 1143678 cri.go:89] found id: ""
	I0603 13:53:28.849094 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.849107 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:28.849114 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:28.849187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:28.892397 1143678 cri.go:89] found id: ""
	I0603 13:53:28.892427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.892441 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:28.892447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:28.892515 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:28.932675 1143678 cri.go:89] found id: ""
	I0603 13:53:28.932715 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.932727 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:28.932735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:28.932840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:28.969732 1143678 cri.go:89] found id: ""
	I0603 13:53:28.969769 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.969781 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:28.969789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:28.969857 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:29.007765 1143678 cri.go:89] found id: ""
	I0603 13:53:29.007791 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.007798 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:29.007804 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:29.007865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:29.044616 1143678 cri.go:89] found id: ""
	I0603 13:53:29.044652 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.044664 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:29.044675 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:29.044734 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:29.081133 1143678 cri.go:89] found id: ""
	I0603 13:53:29.081166 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.081187 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:29.081198 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:29.081213 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:29.095753 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:29.095783 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:29.174472 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:29.174496 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:29.174516 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:29.251216 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:29.251262 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:29.289127 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:29.289168 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:31.845335 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:31.860631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:31.860720 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:31.904507 1143678 cri.go:89] found id: ""
	I0603 13:53:31.904544 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.904556 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:31.904564 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:31.904633 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:31.940795 1143678 cri.go:89] found id: ""
	I0603 13:53:31.940832 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.940845 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:31.940852 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:31.940921 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:31.978447 1143678 cri.go:89] found id: ""
	I0603 13:53:31.978481 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.978499 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:31.978507 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:31.978569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:32.017975 1143678 cri.go:89] found id: ""
	I0603 13:53:32.018009 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.018018 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:32.018025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:32.018089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:32.053062 1143678 cri.go:89] found id: ""
	I0603 13:53:32.053091 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.053099 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:32.053106 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:32.053181 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:32.089822 1143678 cri.go:89] found id: ""
	I0603 13:53:32.089856 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.089868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:32.089877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:32.089944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:32.126243 1143678 cri.go:89] found id: ""
	I0603 13:53:32.126280 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.126291 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:32.126299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:32.126358 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:32.163297 1143678 cri.go:89] found id: ""
	I0603 13:53:32.163346 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.163357 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:32.163370 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:32.163386 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:32.218452 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:32.218495 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:32.233688 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:32.233731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:32.318927 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:32.318947 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:32.318963 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:28.317046 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:30.317308 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.318273 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.370654 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:33.371038 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.991594 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:34.492142 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.403734 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:32.403786 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:34.947857 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:34.961894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:34.961983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:35.006279 1143678 cri.go:89] found id: ""
	I0603 13:53:35.006308 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.006318 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:35.006326 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:35.006398 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:35.042765 1143678 cri.go:89] found id: ""
	I0603 13:53:35.042794 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.042807 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:35.042815 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:35.042877 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:35.084332 1143678 cri.go:89] found id: ""
	I0603 13:53:35.084365 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.084375 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:35.084381 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:35.084448 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:35.121306 1143678 cri.go:89] found id: ""
	I0603 13:53:35.121337 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.121348 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:35.121358 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:35.121444 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:35.155952 1143678 cri.go:89] found id: ""
	I0603 13:53:35.155994 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.156008 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:35.156016 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:35.156089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:35.196846 1143678 cri.go:89] found id: ""
	I0603 13:53:35.196881 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.196893 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:35.196902 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:35.196972 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:35.232396 1143678 cri.go:89] found id: ""
	I0603 13:53:35.232429 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.232440 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:35.232449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:35.232528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:35.269833 1143678 cri.go:89] found id: ""
	I0603 13:53:35.269862 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.269872 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:35.269885 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:35.269902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:35.357754 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:35.357794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:35.399793 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:35.399822 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:35.453742 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:35.453782 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:35.468431 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:35.468465 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:35.547817 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:34.816178 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.817093 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:35.373072 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:37.870173 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.989364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.990163 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.048517 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:38.063481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:38.063569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:38.100487 1143678 cri.go:89] found id: ""
	I0603 13:53:38.100523 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.100535 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:38.100543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:38.100612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:38.137627 1143678 cri.go:89] found id: ""
	I0603 13:53:38.137665 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.137678 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:38.137686 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:38.137754 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:38.176138 1143678 cri.go:89] found id: ""
	I0603 13:53:38.176172 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.176190 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:38.176199 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:38.176265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:38.214397 1143678 cri.go:89] found id: ""
	I0603 13:53:38.214439 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.214451 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:38.214459 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:38.214528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:38.250531 1143678 cri.go:89] found id: ""
	I0603 13:53:38.250563 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.250573 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:38.250580 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:38.250642 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:38.286558 1143678 cri.go:89] found id: ""
	I0603 13:53:38.286587 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.286595 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:38.286601 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:38.286652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:38.327995 1143678 cri.go:89] found id: ""
	I0603 13:53:38.328043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.328055 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:38.328062 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:38.328126 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:38.374266 1143678 cri.go:89] found id: ""
	I0603 13:53:38.374300 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.374311 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:38.374324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:38.374341 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:38.426876 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:38.426918 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:38.443296 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:38.443340 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:38.514702 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:38.514728 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:38.514746 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:38.601536 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:38.601590 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:41.141766 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:41.155927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:41.156006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:41.196829 1143678 cri.go:89] found id: ""
	I0603 13:53:41.196871 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.196884 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:41.196896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:41.196967 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:41.231729 1143678 cri.go:89] found id: ""
	I0603 13:53:41.231780 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.231802 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:41.231812 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:41.231900 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:41.266663 1143678 cri.go:89] found id: ""
	I0603 13:53:41.266699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.266711 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:41.266720 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:41.266783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:41.305251 1143678 cri.go:89] found id: ""
	I0603 13:53:41.305278 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.305286 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:41.305292 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:41.305351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:41.342527 1143678 cri.go:89] found id: ""
	I0603 13:53:41.342556 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.342568 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:41.342575 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:41.342637 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:41.379950 1143678 cri.go:89] found id: ""
	I0603 13:53:41.379982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.379992 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:41.379999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:41.380068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:41.414930 1143678 cri.go:89] found id: ""
	I0603 13:53:41.414965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.414973 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:41.414980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:41.415043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:41.449265 1143678 cri.go:89] found id: ""
	I0603 13:53:41.449299 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.449310 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:41.449324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:41.449343 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:41.502525 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:41.502560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:41.519357 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:41.519390 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:41.591443 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:41.591471 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:41.591485 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:41.668758 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:41.668802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:39.317333 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.317598 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:40.370844 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:42.871161 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.489574 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:43.989620 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:44.211768 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:44.226789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:44.226869 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:44.265525 1143678 cri.go:89] found id: ""
	I0603 13:53:44.265553 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.265561 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:44.265568 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:44.265646 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:44.304835 1143678 cri.go:89] found id: ""
	I0603 13:53:44.304866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.304874 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:44.304880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:44.304935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:44.345832 1143678 cri.go:89] found id: ""
	I0603 13:53:44.345875 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.345885 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:44.345891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:44.345950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:44.386150 1143678 cri.go:89] found id: ""
	I0603 13:53:44.386186 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.386198 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:44.386207 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:44.386268 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:44.423662 1143678 cri.go:89] found id: ""
	I0603 13:53:44.423697 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.423709 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:44.423719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:44.423788 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:44.462437 1143678 cri.go:89] found id: ""
	I0603 13:53:44.462464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.462473 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:44.462481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:44.462567 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:44.501007 1143678 cri.go:89] found id: ""
	I0603 13:53:44.501062 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.501074 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:44.501081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:44.501138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:44.535501 1143678 cri.go:89] found id: ""
	I0603 13:53:44.535543 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.535554 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:44.535567 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:44.535585 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:44.587114 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:44.587157 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:44.602151 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:44.602180 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:44.674065 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:44.674104 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:44.674122 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:44.757443 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:44.757488 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.306481 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:47.319895 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:47.319958 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:43.818030 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.316852 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:45.370762 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.371799 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:49.871512 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.488076 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:48.488472 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.488892 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.356975 1143678 cri.go:89] found id: ""
	I0603 13:53:47.357013 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.357026 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:47.357034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:47.357106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:47.393840 1143678 cri.go:89] found id: ""
	I0603 13:53:47.393869 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.393877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:47.393884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:47.393936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:47.428455 1143678 cri.go:89] found id: ""
	I0603 13:53:47.428493 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.428506 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:47.428514 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:47.428597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:47.463744 1143678 cri.go:89] found id: ""
	I0603 13:53:47.463777 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.463788 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:47.463795 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:47.463855 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:47.498134 1143678 cri.go:89] found id: ""
	I0603 13:53:47.498159 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.498167 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:47.498173 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:47.498245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:47.534153 1143678 cri.go:89] found id: ""
	I0603 13:53:47.534195 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.534206 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:47.534219 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:47.534272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:47.567148 1143678 cri.go:89] found id: ""
	I0603 13:53:47.567179 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.567187 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:47.567194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:47.567249 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:47.605759 1143678 cri.go:89] found id: ""
	I0603 13:53:47.605790 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.605798 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:47.605810 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:47.605824 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:47.683651 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:47.683692 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:47.683705 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:47.763810 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:47.763848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.806092 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:47.806131 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:47.859637 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:47.859677 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.377538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:50.391696 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:50.391776 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:50.433968 1143678 cri.go:89] found id: ""
	I0603 13:53:50.434001 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.434013 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:50.434020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:50.434080 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:50.470561 1143678 cri.go:89] found id: ""
	I0603 13:53:50.470589 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.470596 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:50.470603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:50.470662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:50.510699 1143678 cri.go:89] found id: ""
	I0603 13:53:50.510727 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.510735 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:50.510741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:50.510808 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:50.553386 1143678 cri.go:89] found id: ""
	I0603 13:53:50.553433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.553445 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:50.553452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:50.553533 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:50.589731 1143678 cri.go:89] found id: ""
	I0603 13:53:50.589779 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.589792 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:50.589801 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:50.589885 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:50.625144 1143678 cri.go:89] found id: ""
	I0603 13:53:50.625180 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.625192 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:50.625201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:50.625274 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:50.669021 1143678 cri.go:89] found id: ""
	I0603 13:53:50.669053 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.669061 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:50.669067 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:50.669121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:50.714241 1143678 cri.go:89] found id: ""
	I0603 13:53:50.714270 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.714284 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:50.714297 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:50.714314 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:50.766290 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:50.766333 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.797242 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:50.797275 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:50.866589 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:50.866616 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:50.866637 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:50.948808 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:50.948854 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:48.318282 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.817445 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.370798 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.377027 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.490719 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.989907 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:53.496797 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:53.511944 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:53.512021 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:53.549028 1143678 cri.go:89] found id: ""
	I0603 13:53:53.549057 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.549066 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:53.549072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:53.549128 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:53.583533 1143678 cri.go:89] found id: ""
	I0603 13:53:53.583566 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.583578 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:53.583586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:53.583652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:53.618578 1143678 cri.go:89] found id: ""
	I0603 13:53:53.618609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.618618 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:53.618626 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:53.618701 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:53.653313 1143678 cri.go:89] found id: ""
	I0603 13:53:53.653347 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.653358 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:53.653364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:53.653442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:53.689805 1143678 cri.go:89] found id: ""
	I0603 13:53:53.689839 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.689849 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:53.689857 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:53.689931 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:53.725538 1143678 cri.go:89] found id: ""
	I0603 13:53:53.725571 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.725584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:53.725592 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:53.725648 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:53.762284 1143678 cri.go:89] found id: ""
	I0603 13:53:53.762325 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.762336 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:53.762345 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:53.762419 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:53.799056 1143678 cri.go:89] found id: ""
	I0603 13:53:53.799083 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.799092 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:53.799102 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:53.799115 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:53.873743 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:53.873809 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:53.919692 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:53.919724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:53.969068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:53.969109 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.983840 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:53.983866 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:54.054842 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.555587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:56.570014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:56.570076 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:56.604352 1143678 cri.go:89] found id: ""
	I0603 13:53:56.604386 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.604400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:56.604408 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:56.604479 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:56.648126 1143678 cri.go:89] found id: ""
	I0603 13:53:56.648161 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.648171 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:56.648177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:56.648231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:56.685621 1143678 cri.go:89] found id: ""
	I0603 13:53:56.685658 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.685670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:56.685678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:56.685763 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:56.721860 1143678 cri.go:89] found id: ""
	I0603 13:53:56.721891 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.721913 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:56.721921 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:56.721989 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:56.757950 1143678 cri.go:89] found id: ""
	I0603 13:53:56.757982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.757995 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:56.758002 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:56.758068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:56.794963 1143678 cri.go:89] found id: ""
	I0603 13:53:56.794991 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.794999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:56.795007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:56.795072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:56.831795 1143678 cri.go:89] found id: ""
	I0603 13:53:56.831827 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.831839 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:56.831846 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:56.831913 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:56.869263 1143678 cri.go:89] found id: ""
	I0603 13:53:56.869293 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.869303 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:56.869314 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:56.869331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:56.945068 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.945096 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:56.945110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:57.028545 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:57.028582 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:57.069973 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:57.070009 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:57.126395 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:57.126436 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.316616 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:55.316981 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:57.317295 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.870680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.371553 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.990964 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.489616 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.644870 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:59.658547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:59.658634 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:59.694625 1143678 cri.go:89] found id: ""
	I0603 13:53:59.694656 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.694665 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:59.694673 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:59.694740 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:59.730475 1143678 cri.go:89] found id: ""
	I0603 13:53:59.730573 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.730590 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:59.730599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:59.730696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:59.768533 1143678 cri.go:89] found id: ""
	I0603 13:53:59.768567 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.768580 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:59.768590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:59.768662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:59.804913 1143678 cri.go:89] found id: ""
	I0603 13:53:59.804944 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.804953 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:59.804960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:59.805014 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:59.850331 1143678 cri.go:89] found id: ""
	I0603 13:53:59.850363 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.850376 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:59.850385 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:59.850466 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:59.890777 1143678 cri.go:89] found id: ""
	I0603 13:53:59.890814 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.890826 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:59.890834 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:59.890909 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:59.931233 1143678 cri.go:89] found id: ""
	I0603 13:53:59.931268 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.931277 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:59.931283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:59.931354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:59.966267 1143678 cri.go:89] found id: ""
	I0603 13:53:59.966307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.966319 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:59.966333 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:59.966356 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:00.019884 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:00.019924 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:00.034936 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:00.034982 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:00.115002 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:00.115035 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:00.115053 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:00.189992 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:00.190035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
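The block above is one iteration of the harness's retry loop: it probes for a kube-apiserver process, asks CRI-O for each control-plane container by name, finds none, and then collects kubelet, dmesg, CRI-O and container-status logs, with `kubectl describe nodes` failing each time because nothing is serving on localhost:8443. For reference only, the same probe sequence can be replayed by hand from a shell on the node; this is a minimal sketch assembled from the commands already shown in the log (the `minikube ssh` entry point and the trailing echo messages are assumptions, not harness output):

    #!/usr/bin/env bash
    # Replay of the probe loop recorded above (illustrative sketch, not harness code).
    # Assumes a shell on the node, e.g. obtained with `minikube ssh`.

    # 1. Is any kube-apiserver process running?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process found"

    # 2. Does the container runtime know about any control-plane containers?
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching ${name}"
    done

    # 3. The same log sources the harness gathers when the probes come back empty.
    sudo journalctl -u kubelet -n 400 | tail -n 20
    sudo journalctl -u crio -n 400 | tail -n 20
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 40
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

    # 4. The call that keeps failing: the apiserver is not listening on localhost:8443.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig || echo "apiserver still unreachable"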
	I0603 13:53:59.818065 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.316183 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.870679 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.872563 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.490213 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.988699 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.737387 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:02.752131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:02.752220 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:02.787863 1143678 cri.go:89] found id: ""
	I0603 13:54:02.787893 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.787902 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:02.787908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:02.787974 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:02.824938 1143678 cri.go:89] found id: ""
	I0603 13:54:02.824973 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.824983 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:02.824989 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:02.825061 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:02.861425 1143678 cri.go:89] found id: ""
	I0603 13:54:02.861461 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.861469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:02.861476 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:02.861546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:02.907417 1143678 cri.go:89] found id: ""
	I0603 13:54:02.907453 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.907475 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:02.907483 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:02.907553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:02.953606 1143678 cri.go:89] found id: ""
	I0603 13:54:02.953640 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.953649 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:02.953655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:02.953728 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:03.007785 1143678 cri.go:89] found id: ""
	I0603 13:54:03.007816 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.007824 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:03.007830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:03.007896 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:03.058278 1143678 cri.go:89] found id: ""
	I0603 13:54:03.058316 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.058329 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:03.058338 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:03.058404 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:03.094766 1143678 cri.go:89] found id: ""
	I0603 13:54:03.094800 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.094811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:03.094824 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:03.094840 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:03.163663 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:03.163690 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:03.163704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:03.250751 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:03.250802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:03.292418 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:03.292466 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:03.344552 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:03.344600 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:05.859965 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:05.875255 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:05.875340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:05.918590 1143678 cri.go:89] found id: ""
	I0603 13:54:05.918619 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.918630 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:05.918637 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:05.918706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:05.953932 1143678 cri.go:89] found id: ""
	I0603 13:54:05.953969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.953980 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:05.953988 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:05.954056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:05.993319 1143678 cri.go:89] found id: ""
	I0603 13:54:05.993348 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.993359 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:05.993368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:05.993468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:06.033047 1143678 cri.go:89] found id: ""
	I0603 13:54:06.033079 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.033087 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:06.033100 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:06.033156 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:06.072607 1143678 cri.go:89] found id: ""
	I0603 13:54:06.072631 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.072640 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:06.072647 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:06.072698 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:06.109944 1143678 cri.go:89] found id: ""
	I0603 13:54:06.109990 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.109999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:06.110007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:06.110071 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:06.150235 1143678 cri.go:89] found id: ""
	I0603 13:54:06.150266 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.150276 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:06.150284 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:06.150349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:06.193963 1143678 cri.go:89] found id: ""
	I0603 13:54:06.193992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.194004 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:06.194017 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:06.194035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:06.235790 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:06.235827 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:06.289940 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:06.289980 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:06.305205 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:06.305240 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:06.381170 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:06.381191 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:06.381206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:04.316812 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.317759 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.370944 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.371668 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:05.989346 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.492021 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.958985 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:08.973364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:08.973462 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:09.015050 1143678 cri.go:89] found id: ""
	I0603 13:54:09.015087 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.015099 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:09.015107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:09.015187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:09.054474 1143678 cri.go:89] found id: ""
	I0603 13:54:09.054508 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.054521 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:09.054533 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:09.054590 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:09.090867 1143678 cri.go:89] found id: ""
	I0603 13:54:09.090905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.090917 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:09.090926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:09.090995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:09.128401 1143678 cri.go:89] found id: ""
	I0603 13:54:09.128433 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.128441 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:09.128447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:09.128511 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:09.162952 1143678 cri.go:89] found id: ""
	I0603 13:54:09.162992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.163005 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:09.163013 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:09.163078 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:09.200375 1143678 cri.go:89] found id: ""
	I0603 13:54:09.200402 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.200410 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:09.200416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:09.200495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:09.244694 1143678 cri.go:89] found id: ""
	I0603 13:54:09.244729 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.244740 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:09.244749 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:09.244818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:09.281633 1143678 cri.go:89] found id: ""
	I0603 13:54:09.281666 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.281675 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:09.281686 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:09.281700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:09.341287 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:09.341331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:09.355379 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:09.355415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:09.435934 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:09.435960 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:09.435979 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:09.518203 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:09.518248 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.061538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:12.076939 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:12.077020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:12.114308 1143678 cri.go:89] found id: ""
	I0603 13:54:12.114344 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.114353 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:12.114359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:12.114427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:12.150336 1143678 cri.go:89] found id: ""
	I0603 13:54:12.150368 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.150383 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:12.150390 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:12.150455 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:12.189881 1143678 cri.go:89] found id: ""
	I0603 13:54:12.189934 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.189946 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:12.189954 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:12.190020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:12.226361 1143678 cri.go:89] found id: ""
	I0603 13:54:12.226396 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.226407 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:12.226415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:12.226488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:12.264216 1143678 cri.go:89] found id: ""
	I0603 13:54:12.264257 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.264265 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:12.264271 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:12.264341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:12.306563 1143678 cri.go:89] found id: ""
	I0603 13:54:12.306600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.306612 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:12.306620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:12.306690 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:12.347043 1143678 cri.go:89] found id: ""
	I0603 13:54:12.347082 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.347094 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:12.347105 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:12.347170 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:08.317824 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.816743 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.816776 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.372079 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.872314 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.990240 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:13.489762 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.383947 1143678 cri.go:89] found id: ""
	I0603 13:54:12.383978 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.383989 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:12.384001 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:12.384018 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:12.464306 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:12.464348 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.505079 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:12.505110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:12.563631 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:12.563666 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:12.578328 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:12.578357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:12.646015 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.147166 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:15.163786 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:15.163865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:15.202249 1143678 cri.go:89] found id: ""
	I0603 13:54:15.202286 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.202296 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:15.202304 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:15.202372 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:15.236305 1143678 cri.go:89] found id: ""
	I0603 13:54:15.236345 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.236359 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:15.236368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:15.236459 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:15.273457 1143678 cri.go:89] found id: ""
	I0603 13:54:15.273493 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.273510 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:15.273521 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:15.273592 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:15.314917 1143678 cri.go:89] found id: ""
	I0603 13:54:15.314951 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.314963 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:15.314984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:15.315055 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:15.353060 1143678 cri.go:89] found id: ""
	I0603 13:54:15.353098 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.353112 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:15.353118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:15.353197 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:15.390412 1143678 cri.go:89] found id: ""
	I0603 13:54:15.390448 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.390460 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:15.390469 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:15.390534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:15.427735 1143678 cri.go:89] found id: ""
	I0603 13:54:15.427771 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.427782 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:15.427789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:15.427854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:15.467134 1143678 cri.go:89] found id: ""
	I0603 13:54:15.467165 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.467175 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:15.467184 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:15.467199 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:15.517924 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:15.517973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:15.531728 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:15.531760 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:15.608397 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.608421 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:15.608444 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:15.688976 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:15.689016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.319250 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:16.817018 1143252 pod_ready.go:81] duration metric: took 4m0.00664589s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:16.817042 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:16.817049 1143252 pod_ready.go:38] duration metric: took 4m6.670583216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:16.817081 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:16.817110 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:16.817158 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:16.871314 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:16.871339 1143252 cri.go:89] found id: ""
	I0603 13:54:16.871350 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:16.871405 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.876249 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:16.876319 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:16.917267 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:16.917298 1143252 cri.go:89] found id: ""
	I0603 13:54:16.917310 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:16.917374 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.923290 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:16.923374 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:16.963598 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:16.963619 1143252 cri.go:89] found id: ""
	I0603 13:54:16.963628 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:16.963689 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.968201 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:16.968277 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:17.008229 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:17.008264 1143252 cri.go:89] found id: ""
	I0603 13:54:17.008274 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:17.008341 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.012719 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:17.012795 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:17.048353 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.048384 1143252 cri.go:89] found id: ""
	I0603 13:54:17.048394 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:17.048459 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.053094 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:17.053162 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:17.088475 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:17.088507 1143252 cri.go:89] found id: ""
	I0603 13:54:17.088518 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:17.088583 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.093293 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:17.093373 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:17.130335 1143252 cri.go:89] found id: ""
	I0603 13:54:17.130370 1143252 logs.go:276] 0 containers: []
	W0603 13:54:17.130381 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:17.130389 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:17.130472 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:17.176283 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:17.176317 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:17.176324 1143252 cri.go:89] found id: ""
	I0603 13:54:17.176335 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:17.176409 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.181455 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.185881 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:17.185902 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:17.239636 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:17.239680 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:17.309488 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:17.309532 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:17.362243 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:17.362282 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:17.401389 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:17.401440 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.442095 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:17.442127 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:17.923198 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:17.923247 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:17.939968 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:17.940000 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:18.075054 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:18.075098 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:18.113954 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:18.113994 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:18.181862 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:18.181906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:18.227105 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:18.227137 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:18.272684 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.272721 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.371753 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:17.870321 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:19.879331 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:15.990326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.489960 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.228279 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:18.242909 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:18.242985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:18.285400 1143678 cri.go:89] found id: ""
	I0603 13:54:18.285445 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.285455 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:18.285461 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:18.285521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:18.321840 1143678 cri.go:89] found id: ""
	I0603 13:54:18.321868 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.321877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:18.321884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:18.321943 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:18.358856 1143678 cri.go:89] found id: ""
	I0603 13:54:18.358888 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.358902 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:18.358911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:18.358979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:18.395638 1143678 cri.go:89] found id: ""
	I0603 13:54:18.395678 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.395691 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:18.395699 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:18.395766 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:18.435541 1143678 cri.go:89] found id: ""
	I0603 13:54:18.435570 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.435581 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:18.435589 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:18.435653 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:18.469491 1143678 cri.go:89] found id: ""
	I0603 13:54:18.469527 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.469538 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:18.469545 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:18.469615 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:18.507986 1143678 cri.go:89] found id: ""
	I0603 13:54:18.508018 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.508030 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:18.508039 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:18.508106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:18.542311 1143678 cri.go:89] found id: ""
	I0603 13:54:18.542343 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.542351 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:18.542361 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:18.542375 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:18.619295 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.619337 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:18.662500 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:18.662540 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:18.714392 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:18.714432 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:18.728750 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:18.728785 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:18.800786 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.301554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:21.315880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:21.315944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:21.358178 1143678 cri.go:89] found id: ""
	I0603 13:54:21.358208 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.358217 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:21.358227 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:21.358289 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:21.395873 1143678 cri.go:89] found id: ""
	I0603 13:54:21.395969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.395995 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:21.396014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:21.396111 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:21.431781 1143678 cri.go:89] found id: ""
	I0603 13:54:21.431810 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.431822 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:21.431831 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:21.431906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.472840 1143678 cri.go:89] found id: ""
	I0603 13:54:21.472872 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.472885 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:21.472893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.472955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.512296 1143678 cri.go:89] found id: ""
	I0603 13:54:21.512333 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.512346 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:21.512353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.512421 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.547555 1143678 cri.go:89] found id: ""
	I0603 13:54:21.547588 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.547599 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:21.547609 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.547670 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.584972 1143678 cri.go:89] found id: ""
	I0603 13:54:21.585005 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.585013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.585019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:21.585085 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:21.621566 1143678 cri.go:89] found id: ""
	I0603 13:54:21.621599 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.621610 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:21.621623 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:21.621639 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:21.637223 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:21.637263 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:21.712272 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.712294 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.712310 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.800453 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:21.800490 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:21.841477 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.841525 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:20.819740 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:20.836917 1143252 api_server.go:72] duration metric: took 4m15.913250824s to wait for apiserver process to appear ...
	I0603 13:54:20.836947 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:20.836988 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:20.837038 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:20.874034 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:20.874064 1143252 cri.go:89] found id: ""
	I0603 13:54:20.874076 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:20.874146 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.878935 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:20.879020 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:20.920390 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:20.920417 1143252 cri.go:89] found id: ""
	I0603 13:54:20.920425 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:20.920494 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.924858 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:20.924934 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:20.966049 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:20.966077 1143252 cri.go:89] found id: ""
	I0603 13:54:20.966088 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:20.966174 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.970734 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:20.970812 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.010892 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.010918 1143252 cri.go:89] found id: ""
	I0603 13:54:21.010929 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:21.010994 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.016274 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.016347 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.055294 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.055318 1143252 cri.go:89] found id: ""
	I0603 13:54:21.055327 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:21.055375 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.060007 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.060069 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.099200 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:21.099225 1143252 cri.go:89] found id: ""
	I0603 13:54:21.099236 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:21.099309 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.103590 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.103662 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.140375 1143252 cri.go:89] found id: ""
	I0603 13:54:21.140409 1143252 logs.go:276] 0 containers: []
	W0603 13:54:21.140422 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.140431 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:21.140498 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:21.180709 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.180735 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.180739 1143252 cri.go:89] found id: ""
	I0603 13:54:21.180747 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:21.180814 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.184952 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.189111 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.189140 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.663768 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:21.663807 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:21.719542 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:21.719573 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:21.786686 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:21.786725 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:21.824908 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:21.824948 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.864778 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:21.864818 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.904450 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:21.904480 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.942006 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:21.942040 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.979636 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.979673 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:22.033943 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:22.033980 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:22.048545 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:22.048578 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:22.154866 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:22.154906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:22.218033 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:22.218073 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:22.374700 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.871898 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:20.989874 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:23.489083 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.394864 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:24.408416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.408527 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.444572 1143678 cri.go:89] found id: ""
	I0603 13:54:24.444603 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.444612 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:24.444618 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.444672 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.483710 1143678 cri.go:89] found id: ""
	I0603 13:54:24.483744 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.483755 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:24.483763 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.483837 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.522396 1143678 cri.go:89] found id: ""
	I0603 13:54:24.522437 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.522450 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:24.522457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.522520 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.560865 1143678 cri.go:89] found id: ""
	I0603 13:54:24.560896 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.560905 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:24.560911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.560964 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:24.598597 1143678 cri.go:89] found id: ""
	I0603 13:54:24.598632 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.598643 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:24.598657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:24.598722 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:24.638854 1143678 cri.go:89] found id: ""
	I0603 13:54:24.638885 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.638897 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:24.638908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:24.638979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:24.678039 1143678 cri.go:89] found id: ""
	I0603 13:54:24.678076 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.678088 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:24.678096 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:24.678166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:24.712836 1143678 cri.go:89] found id: ""
	I0603 13:54:24.712871 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.712883 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:24.712896 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:24.712913 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:24.763503 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:24.763545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:24.779383 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:24.779416 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:24.867254 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:24.867287 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:24.867307 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:24.944920 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:24.944957 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:24.768551 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:54:24.774942 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:54:24.776278 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:24.776301 1143252 api_server.go:131] duration metric: took 3.939347802s to wait for apiserver health ...
	I0603 13:54:24.776310 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:24.776334 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.776386 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.827107 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:24.827139 1143252 cri.go:89] found id: ""
	I0603 13:54:24.827152 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:24.827210 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.831681 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.831752 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.875645 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:24.875689 1143252 cri.go:89] found id: ""
	I0603 13:54:24.875711 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:24.875778 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.880157 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.880256 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.932131 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:24.932157 1143252 cri.go:89] found id: ""
	I0603 13:54:24.932167 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:24.932262 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.938104 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.938168 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.980289 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:24.980318 1143252 cri.go:89] found id: ""
	I0603 13:54:24.980327 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:24.980389 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.985608 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.985687 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:25.033726 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.033749 1143252 cri.go:89] found id: ""
	I0603 13:54:25.033757 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:25.033811 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.038493 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:25.038561 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:25.077447 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.077474 1143252 cri.go:89] found id: ""
	I0603 13:54:25.077485 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:25.077545 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.081701 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:25.081770 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:25.120216 1143252 cri.go:89] found id: ""
	I0603 13:54:25.120246 1143252 logs.go:276] 0 containers: []
	W0603 13:54:25.120254 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:25.120261 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:25.120313 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:25.162562 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.162596 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.162602 1143252 cri.go:89] found id: ""
	I0603 13:54:25.162613 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:25.162678 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.167179 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.171531 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:25.171558 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:25.223749 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:25.223787 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:25.290251 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:25.290293 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:25.315271 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:25.315302 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:25.433219 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:25.433257 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:25.473156 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:25.473194 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:25.513988 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:25.514015 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.587224 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:25.587260 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:25.638872 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:25.638909 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:25.687323 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:25.687372 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.739508 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:25.739539 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.775066 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:25.775096 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.811982 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:25.812016 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:28.685228 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:28.685261 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.685265 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.685269 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.685272 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.685276 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.685279 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.685285 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.685290 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.685298 1143252 system_pods.go:74] duration metric: took 3.908982484s to wait for pod list to return data ...
	I0603 13:54:28.685305 1143252 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:28.687914 1143252 default_sa.go:45] found service account: "default"
	I0603 13:54:28.687939 1143252 default_sa.go:55] duration metric: took 2.627402ms for default service account to be created ...
	I0603 13:54:28.687947 1143252 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:28.693336 1143252 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:28.693369 1143252 system_pods.go:89] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.693375 1143252 system_pods.go:89] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.693379 1143252 system_pods.go:89] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.693385 1143252 system_pods.go:89] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.693389 1143252 system_pods.go:89] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.693393 1143252 system_pods.go:89] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.693401 1143252 system_pods.go:89] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.693418 1143252 system_pods.go:89] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.693438 1143252 system_pods.go:126] duration metric: took 5.484487ms to wait for k8s-apps to be running ...
	I0603 13:54:28.693450 1143252 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:28.693497 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:28.710364 1143252 system_svc.go:56] duration metric: took 16.901982ms WaitForService to wait for kubelet
	I0603 13:54:28.710399 1143252 kubeadm.go:576] duration metric: took 4m23.786738812s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:28.710444 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:28.713300 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:28.713328 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:28.713362 1143252 node_conditions.go:105] duration metric: took 2.909242ms to run NodePressure ...
	I0603 13:54:28.713382 1143252 start.go:240] waiting for startup goroutines ...
	I0603 13:54:28.713392 1143252 start.go:245] waiting for cluster config update ...
	I0603 13:54:28.713424 1143252 start.go:254] writing updated cluster config ...
	I0603 13:54:28.713798 1143252 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:28.767538 1143252 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:28.769737 1143252 out.go:177] * Done! kubectl is now configured to use "embed-certs-223260" cluster and "default" namespace by default
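
The segment above records the readiness gates minikube checks before declaring the embed-certs-223260 cluster up: all kube-system pods Running, the "default" service account present, and the kubelet systemd unit active. The following is a minimal illustrative sketch of those same checks driven through kubectl and systemctl, not minikube's actual implementation; the context name is taken from the log, the real run executes systemctl over SSH inside the VM (and with sudo), and the timeout value here is arbitrary.

	// readiness_sketch.go - illustrative only; not minikube source.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// kubeletActive reports whether the kubelet systemd unit is running,
	// mirroring the "systemctl is-active" probe seen in the log (sudo/SSH omitted).
	func kubeletActive() bool {
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	// kubeSystemPhases returns the phase of every kube-system pod via kubectl.
	func kubeSystemPhases(context string) ([]string, error) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pods", "-n", "kube-system",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		const context = "embed-certs-223260" // name from the log above
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			phases, err := kubeSystemPhases(context)
			allRunning := err == nil && len(phases) > 0
			for _, p := range phases {
				if p != "Running" && p != "Succeeded" {
					allRunning = false
				}
			}
			if allRunning && kubeletActive() {
				fmt.Println("cluster looks ready")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for readiness")
	}
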
	I0603 13:54:27.370695 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:29.870214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:25.990136 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:28.489276 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:30.489392 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
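
The interleaved pod_ready.go:102 lines come from two other minikube processes repeatedly polling the metrics-server pod's Ready condition, which stays "False" throughout this window (and, further down, hits the 4m0s deadline with "context deadline exceeded"). Below is a small sketch of that kind of poll against a pod's Ready condition with a hard deadline, assuming working kubectl access; it is not minikube's code, and the pod name and timeout are copied from the log for illustration.

	// podready_sketch.go - illustrative poll of a pod's Ready condition.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// readyStatus returns the value of the pod's Ready condition ("True"/"False").
	func readyStatus(namespace, pod string) (string, error) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const (
			namespace = "kube-system"
			pod       = "metrics-server-569cc877fc-8xw9v" // name from the log
			timeout   = 4 * time.Minute                   // matches the 4m0s deadline seen later
		)
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			status, err := readyStatus(namespace, pod)
			if err == nil && status == "True" {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod %q Ready=%q, retrying...\n", pod, status)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("waitPodCondition: context deadline exceeded (as in the log)")
	}
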
	I0603 13:54:27.495908 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:27.509885 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:27.509968 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:27.545591 1143678 cri.go:89] found id: ""
	I0603 13:54:27.545626 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.545635 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:27.545641 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:27.545695 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:27.583699 1143678 cri.go:89] found id: ""
	I0603 13:54:27.583728 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.583740 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:27.583748 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:27.583835 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:27.623227 1143678 cri.go:89] found id: ""
	I0603 13:54:27.623268 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.623277 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:27.623283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:27.623341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:27.663057 1143678 cri.go:89] found id: ""
	I0603 13:54:27.663090 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.663102 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:27.663109 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:27.663187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:27.708448 1143678 cri.go:89] found id: ""
	I0603 13:54:27.708481 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.708489 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:27.708495 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:27.708551 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:27.743629 1143678 cri.go:89] found id: ""
	I0603 13:54:27.743663 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.743674 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:27.743682 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:27.743748 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:27.778094 1143678 cri.go:89] found id: ""
	I0603 13:54:27.778128 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.778137 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:27.778147 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:27.778210 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:27.813137 1143678 cri.go:89] found id: ""
	I0603 13:54:27.813170 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.813180 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:27.813192 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:27.813208 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:27.861100 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:27.861136 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:27.914752 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:27.914794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:27.929479 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:27.929511 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:28.002898 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:28.002926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:28.002942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
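
When no control-plane containers are found, the log shows minikube falling back to collecting diagnostics: the kubelet and CRI-O journals, dmesg, container status, and a "kubectl describe nodes" that fails because the apiserver is down. The sketch below is a rough local rendering of that collection loop; in the real run every command goes through an SSH runner into the VM and is prefixed with sudo, both of which are omitted here.

	// diag_sketch.go - illustrative diagnostics collection, run locally instead of over SSH.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Commands mirrored from the "Gathering logs for ..." lines above (sudo dropped).
		diags := map[string]string{
			"kubelet":          `journalctl -u kubelet -n 400`,
			"CRI-O":            `journalctl -u crio -n 400`,
			"dmesg":            `dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
			"container status": "`which crictl || echo crictl` ps -a || docker ps -a",
			"describe nodes":   `kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
		}
		for name, cmd := range diags {
			fmt.Printf("==> Gathering logs for %s ...\n", name)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Print(string(out))
			if err != nil {
				// e.g. "connection refused" from kubectl while the apiserver is down.
				fmt.Printf("(%s failed: %v)\n", name, err)
			}
		}
	}
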
	I0603 13:54:30.581890 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:30.595982 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:30.596068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:30.638804 1143678 cri.go:89] found id: ""
	I0603 13:54:30.638841 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.638853 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:30.638862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:30.638942 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:30.677202 1143678 cri.go:89] found id: ""
	I0603 13:54:30.677242 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.677253 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:30.677262 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:30.677329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:30.717382 1143678 cri.go:89] found id: ""
	I0603 13:54:30.717436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.717446 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:30.717455 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:30.717523 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:30.753691 1143678 cri.go:89] found id: ""
	I0603 13:54:30.753719 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.753728 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:30.753734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:30.753798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:30.790686 1143678 cri.go:89] found id: ""
	I0603 13:54:30.790714 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.790723 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:30.790729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:30.790783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:30.830196 1143678 cri.go:89] found id: ""
	I0603 13:54:30.830224 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.830237 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:30.830245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:30.830299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:30.865952 1143678 cri.go:89] found id: ""
	I0603 13:54:30.865980 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.865992 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:30.866000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:30.866066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:30.901561 1143678 cri.go:89] found id: ""
	I0603 13:54:30.901592 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.901601 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:30.901610 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:30.901627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.979416 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:30.979459 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:31.035024 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:31.035061 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:31.089005 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:31.089046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:31.105176 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:31.105210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:31.172862 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
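
Every "describe nodes" attempt in this stretch fails with "connection refused" on localhost:8443 because no apiserver container exists yet. A cheap pre-check, sketched below purely for illustration, would distinguish "apiserver not listening" from other kubectl failures before spending time on the describe call; the host and port are taken from the error text, and nothing here is part of minikube.

	// apiserver_probe_sketch.go - illustrative reachability probe for the apiserver port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "localhost:8443" // endpoint from the "connection refused" errors above
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
			fmt.Println("skip kubectl describe nodes; collect journals instead")
			return
		}
		conn.Close()
		fmt.Printf("something is listening on %s; kubectl is worth trying\n", addr)
	}
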
	I0603 13:54:32.371040 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.870810 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:32.989041 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.989599 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:33.674069 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:33.688423 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:33.688499 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:33.729840 1143678 cri.go:89] found id: ""
	I0603 13:54:33.729876 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.729886 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:33.729893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:33.729945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:33.764984 1143678 cri.go:89] found id: ""
	I0603 13:54:33.765010 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.765018 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:33.765025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:33.765075 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:33.798411 1143678 cri.go:89] found id: ""
	I0603 13:54:33.798446 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.798459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:33.798468 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:33.798547 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:33.831565 1143678 cri.go:89] found id: ""
	I0603 13:54:33.831600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.831611 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:33.831620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:33.831688 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:33.869701 1143678 cri.go:89] found id: ""
	I0603 13:54:33.869727 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.869735 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:33.869741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:33.869802 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:33.906108 1143678 cri.go:89] found id: ""
	I0603 13:54:33.906134 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.906144 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:33.906153 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:33.906218 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:33.946577 1143678 cri.go:89] found id: ""
	I0603 13:54:33.946607 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.946615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:33.946621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:33.946673 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:33.986691 1143678 cri.go:89] found id: ""
	I0603 13:54:33.986724 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.986743 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:33.986757 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:33.986775 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:34.044068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:34.044110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:34.059686 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:34.059724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:34.141490 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:34.141514 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:34.141531 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:34.227890 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:34.227930 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:36.778969 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:36.792527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:36.792612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:36.828044 1143678 cri.go:89] found id: ""
	I0603 13:54:36.828083 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.828096 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:36.828102 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:36.828166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:36.863869 1143678 cri.go:89] found id: ""
	I0603 13:54:36.863905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.863917 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:36.863926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:36.863996 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:36.899610 1143678 cri.go:89] found id: ""
	I0603 13:54:36.899649 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.899661 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:36.899669 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:36.899742 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:36.938627 1143678 cri.go:89] found id: ""
	I0603 13:54:36.938664 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.938675 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:36.938683 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:36.938739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:36.973810 1143678 cri.go:89] found id: ""
	I0603 13:54:36.973842 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.973857 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:36.973863 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:36.973915 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.013759 1143678 cri.go:89] found id: ""
	I0603 13:54:37.013792 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.013805 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:37.013813 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.013881 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.049665 1143678 cri.go:89] found id: ""
	I0603 13:54:37.049697 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.049706 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.049712 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:37.049787 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:37.087405 1143678 cri.go:89] found id: ""
	I0603 13:54:37.087436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.087446 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:37.087457 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:37.087470 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:37.126443 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.126476 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.177976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:37.178015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:37.192821 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:37.192860 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:37.267895 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:37.267926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:37.267945 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:36.871536 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:37.371048 1143450 pod_ready.go:81] duration metric: took 4m0.007102739s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:37.371080 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:37.371092 1143450 pod_ready.go:38] duration metric: took 4m5.236838117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:37.371111 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:37.371145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:37.371202 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:37.428454 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:37.428487 1143450 cri.go:89] found id: ""
	I0603 13:54:37.428498 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:37.428564 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.434473 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:37.434552 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:37.476251 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.476288 1143450 cri.go:89] found id: ""
	I0603 13:54:37.476300 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:37.476368 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.483190 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:37.483280 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:37.528660 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.528693 1143450 cri.go:89] found id: ""
	I0603 13:54:37.528704 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:37.528797 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.533716 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:37.533809 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:37.573995 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.574016 1143450 cri.go:89] found id: ""
	I0603 13:54:37.574025 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:37.574071 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.578385 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:37.578465 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:37.616468 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:37.616511 1143450 cri.go:89] found id: ""
	I0603 13:54:37.616522 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:37.616603 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.621204 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:37.621277 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.661363 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.661390 1143450 cri.go:89] found id: ""
	I0603 13:54:37.661401 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:37.661507 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.665969 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.666055 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.705096 1143450 cri.go:89] found id: ""
	I0603 13:54:37.705128 1143450 logs.go:276] 0 containers: []
	W0603 13:54:37.705136 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.705142 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:37.705210 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:37.746365 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:37.746400 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.746404 1143450 cri.go:89] found id: ""
	I0603 13:54:37.746412 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:37.746470 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.750874 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.755146 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:37.755175 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.811365 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:37.811403 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.849687 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.849729 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.904870 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:37.904909 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.955448 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:37.955497 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.996659 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:37.996687 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:38.047501 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:38.047540 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:38.090932 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:38.090969 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:38.606612 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:38.606672 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:38.652732 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:38.652774 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:38.670570 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:38.670620 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:38.812156 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:38.812208 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:38.862940 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:38.862988 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.491134 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.990379 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.846505 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:39.860426 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:39.860514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:39.896684 1143678 cri.go:89] found id: ""
	I0603 13:54:39.896712 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.896726 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:39.896736 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:39.896801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:39.932437 1143678 cri.go:89] found id: ""
	I0603 13:54:39.932482 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.932494 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:39.932503 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:39.932571 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:39.967850 1143678 cri.go:89] found id: ""
	I0603 13:54:39.967883 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.967891 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:39.967898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:39.967952 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:40.003255 1143678 cri.go:89] found id: ""
	I0603 13:54:40.003284 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.003292 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:40.003298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:40.003351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:40.045865 1143678 cri.go:89] found id: ""
	I0603 13:54:40.045892 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.045904 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:40.045912 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:40.045976 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:40.082469 1143678 cri.go:89] found id: ""
	I0603 13:54:40.082498 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.082507 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:40.082513 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:40.082584 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:40.117181 1143678 cri.go:89] found id: ""
	I0603 13:54:40.117231 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.117242 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:40.117250 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:40.117320 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:40.157776 1143678 cri.go:89] found id: ""
	I0603 13:54:40.157813 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.157822 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:40.157832 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:40.157848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:40.213374 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:40.213437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:40.228298 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:40.228330 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:40.305450 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:40.305485 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:40.305503 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:40.393653 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:40.393704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.405129 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:41.423234 1143450 api_server.go:72] duration metric: took 4m14.998447047s to wait for apiserver process to appear ...
	I0603 13:54:41.423266 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:41.423312 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:41.423374 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:41.463540 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.463562 1143450 cri.go:89] found id: ""
	I0603 13:54:41.463570 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:41.463620 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.468145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:41.468226 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:41.511977 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.512000 1143450 cri.go:89] found id: ""
	I0603 13:54:41.512017 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:41.512081 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.516600 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:41.516674 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:41.554392 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:41.554420 1143450 cri.go:89] found id: ""
	I0603 13:54:41.554443 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:41.554508 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.558983 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:41.559039 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:41.597710 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:41.597737 1143450 cri.go:89] found id: ""
	I0603 13:54:41.597747 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:41.597811 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.602164 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:41.602227 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:41.639422 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:41.639452 1143450 cri.go:89] found id: ""
	I0603 13:54:41.639462 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:41.639532 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.644093 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:41.644171 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:41.682475 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.682506 1143450 cri.go:89] found id: ""
	I0603 13:54:41.682515 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:41.682578 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.687654 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:41.687734 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:41.724804 1143450 cri.go:89] found id: ""
	I0603 13:54:41.724839 1143450 logs.go:276] 0 containers: []
	W0603 13:54:41.724850 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:41.724858 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:41.724928 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:41.764625 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:41.764653 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:41.764659 1143450 cri.go:89] found id: ""
	I0603 13:54:41.764670 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:41.764736 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.769499 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.773782 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:41.773806 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.816486 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:41.816520 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:41.833538 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:41.833569 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.877958 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:41.878004 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.922575 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:41.922612 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.983865 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:41.983900 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:42.032746 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:42.032773 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:42.076129 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:42.076166 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:42.129061 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:42.129099 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:42.248179 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:42.248213 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:42.292179 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:42.292288 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:42.340447 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:42.340493 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:42.381993 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:42.382024 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:42.488926 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:44.990221 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:42.934691 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:42.948505 1143678 kubeadm.go:591] duration metric: took 4m4.45791317s to restartPrimaryControlPlane
	W0603 13:54:42.948592 1143678 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:54:42.948629 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:54:48.316951 1143678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.36829775s)
	I0603 13:54:48.317039 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:48.333630 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:54:48.345772 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:54:48.357359 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:54:48.357386 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:54:48.357477 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:54:48.367844 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:54:48.367917 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:54:48.379349 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:54:48.389684 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:54:48.389760 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:54:48.401562 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.412670 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:54:48.412743 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.424261 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:54:48.434598 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:54:48.434674 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:54:48.446187 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:54:48.527873 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:54:48.528073 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:54:48.695244 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:54:48.695401 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:54:48.695581 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:54:48.930141 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
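
The old-k8s-version run above gives up on restarting the control plane, runs "kubeadm reset", and then walks the /etc/kubernetes/*.conf files: each is grepped for the expected control-plane endpoint and removed when the grep fails, before "kubeadm init" is re-run. Below is a condensed sketch of that cleanup loop, assuming local execution; the paths and endpoint string are copied from the log, while the sudo/SSH plumbing and the full kubeadm invocation are left out.

	// staleconf_sketch.go - illustrative cleanup of stale kubeconfig files before kubeadm init.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443" // from the log
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, conf := range confs {
			// grep exits non-zero when the endpoint is missing or the file does not exist.
			if err := exec.Command("grep", "-q", endpoint, conf).Run(); err != nil {
				fmt.Printf("%s looks stale or missing, removing\n", conf)
				_ = os.Remove(conf) // ignore "no such file", as the log's rm -f does
			}
		}
		// The real flow then re-runs:
		//   kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=...
		fmt.Println("ready for kubeadm init")
	}
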
	I0603 13:54:45.281199 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:54:45.286305 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:54:45.287421 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:45.287444 1143450 api_server.go:131] duration metric: took 3.864171356s to wait for apiserver health ...
	I0603 13:54:45.287455 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:45.287486 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:45.287540 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:45.328984 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.329012 1143450 cri.go:89] found id: ""
	I0603 13:54:45.329022 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:45.329075 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.334601 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:45.334683 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:45.382942 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:45.382967 1143450 cri.go:89] found id: ""
	I0603 13:54:45.382978 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:45.383039 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.387904 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:45.387969 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:45.431948 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.431981 1143450 cri.go:89] found id: ""
	I0603 13:54:45.431992 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:45.432052 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.440993 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:45.441074 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:45.490086 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.490114 1143450 cri.go:89] found id: ""
	I0603 13:54:45.490125 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:45.490194 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.494628 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:45.494688 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:45.532264 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:45.532296 1143450 cri.go:89] found id: ""
	I0603 13:54:45.532307 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:45.532374 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.536914 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:45.536985 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:45.576641 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:45.576663 1143450 cri.go:89] found id: ""
	I0603 13:54:45.576671 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:45.576720 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.580872 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:45.580926 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:45.628834 1143450 cri.go:89] found id: ""
	I0603 13:54:45.628864 1143450 logs.go:276] 0 containers: []
	W0603 13:54:45.628872 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:45.628879 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:45.628931 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:45.671689 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:45.671719 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:45.671727 1143450 cri.go:89] found id: ""
	I0603 13:54:45.671740 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:45.671799 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.677161 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.682179 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:45.682219 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:45.731155 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:45.731192 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:45.846365 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:45.846411 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.907694 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:45.907733 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.952881 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:45.952919 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.998674 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:45.998722 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:46.061902 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:46.061949 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:46.106017 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:46.106056 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:46.473915 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:46.473981 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:46.530212 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:46.530260 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:46.545954 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:46.545996 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:46.595057 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:46.595097 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:46.637835 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:46.637872 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
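The lines above are minikube's log-gathering pass for the diagnostic dump: each control-plane component's container ID is resolved with "crictl ps -a --quiet --name=<component>", then the last 400 lines are pulled with "crictl logs --tail 400 <id>". A minimal Go sketch of that loop, assuming crictl and sudo are present on the node; the helper below is illustrative rather than minikube's actual code:

    // Sketch of the log-gathering loop above: resolve container IDs for a
    // component with crictl, then tail the last 400 lines of each container.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func tailComponentLogs(component string) {
    	// "crictl ps -a --quiet --name=<component>" prints one container ID per line.
    	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		fmt.Printf("W listing %s containers: %v\n", component, err)
    		return
    	}
    	for _, id := range strings.Fields(string(ids)) {
    		// Mirror "crictl logs --tail 400 <id>" from the log above.
    		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			fmt.Printf("W logs for %s [%s]: %v\n", component, id, err)
    			continue
    		}
    		fmt.Printf("==> %s [%s] <==\n%s\n", component, id, out)
    	}
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
    		tailComponentLogs(c)
    	}
    }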
	I0603 13:54:49.190539 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:49.190572 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.190577 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.190582 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.190586 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.190590 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.190593 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.190602 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.190609 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.190620 1143450 system_pods.go:74] duration metric: took 3.903157143s to wait for pod list to return data ...
	I0603 13:54:49.190633 1143450 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:49.193192 1143450 default_sa.go:45] found service account: "default"
	I0603 13:54:49.193219 1143450 default_sa.go:55] duration metric: took 2.575016ms for default service account to be created ...
	I0603 13:54:49.193229 1143450 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:49.202028 1143450 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:49.202065 1143450 system_pods.go:89] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.202074 1143450 system_pods.go:89] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.202081 1143450 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.202088 1143450 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.202094 1143450 system_pods.go:89] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.202100 1143450 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.202113 1143450 system_pods.go:89] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.202124 1143450 system_pods.go:89] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.202135 1143450 system_pods.go:126] duration metric: took 8.899065ms to wait for k8s-apps to be running ...
	I0603 13:54:49.202152 1143450 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:49.202209 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:49.220199 1143450 system_svc.go:56] duration metric: took 18.025994ms WaitForService to wait for kubelet
	I0603 13:54:49.220242 1143450 kubeadm.go:576] duration metric: took 4m22.79546223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:49.220269 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:49.223327 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:49.223354 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:49.223367 1143450 node_conditions.go:105] duration metric: took 3.093435ms to run NodePressure ...
	I0603 13:54:49.223383 1143450 start.go:240] waiting for startup goroutines ...
	I0603 13:54:49.223393 1143450 start.go:245] waiting for cluster config update ...
	I0603 13:54:49.223408 1143450 start.go:254] writing updated cluster config ...
	I0603 13:54:49.223704 1143450 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:49.277924 1143450 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:49.280442 1143450 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-030870" cluster and "default" namespace by default
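The "minor skew: 0" figure in the line above is the distance between the kubectl client's minor version and the cluster's. A tiny sketch of that comparison with hard-coded version strings (illustrative only, not minikube's implementation):

    // Parse "X.Y.Z" version strings and report the minor-version skew.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    func minorOf(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return -1
    	}
    	m, err := strconv.Atoi(parts[1])
    	if err != nil {
    		return -1
    	}
    	return m
    }

    func main() {
    	client, cluster := "1.30.1", "1.30.1" // hard-coded for illustration
    	skew := minorOf(client) - minorOf(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }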
	I0603 13:54:48.932024 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:54:48.932110 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:54:48.932168 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:54:48.932235 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:54:48.932305 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:54:48.932481 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:54:48.932639 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:54:48.933272 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:54:48.933771 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:54:48.934251 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:54:48.934654 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:54:48.934712 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:54:48.934762 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:54:49.063897 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:54:49.266680 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:54:49.364943 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:54:49.628905 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:54:49.645861 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:54:49.645991 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:54:49.646049 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:54:49.795196 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:54:47.490336 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.989543 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.798407 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:54:49.798564 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:54:49.800163 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:54:49.802226 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:54:49.803809 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:54:49.806590 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:54:52.490088 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:54.990092 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:57.488119 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:59.489775 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:01.490194 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:03.989075 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:05.990054 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:08.489226 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:10.989028 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:13.489118 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:15.489176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:17.989008 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:20.489091 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:22.989284 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:24.990020 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.489326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.983679 1142862 pod_ready.go:81] duration metric: took 4m0.001142992s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	E0603 13:55:27.983708 1142862 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 13:55:27.983731 1142862 pod_ready.go:38] duration metric: took 4m12.038904247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:55:27.983760 1142862 kubeadm.go:591] duration metric: took 4m21.273943202s to restartPrimaryControlPlane
	W0603 13:55:27.983831 1142862 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:55:27.983865 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:55:29.807867 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:55:29.808474 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:29.808754 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:34.809455 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:34.809722 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:44.810305 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:44.810491 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
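The repeated kubelet-check failures above come from kubeadm probing the kubelet's healthz endpoint on port 10248; "connection refused" means nothing is listening there yet. A sketch of that probe, with the retry count and sleep chosen arbitrarily for illustration:

    // Probe http://localhost:10248/healthz the way kubeadm's kubelet-check does.
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	for i := 0; i < 5; i++ {
    		resp, err := client.Get("http://localhost:10248/healthz")
    		if err != nil {
    			// Matches the "connect: connection refused" failures in the log.
    			fmt.Println("kubelet not healthy yet:", err)
    			time.Sleep(5 * time.Second)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
    		return
    	}
    }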
	I0603 13:55:59.870853 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.886953189s)
	I0603 13:55:59.870958 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:55:59.889658 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:55:59.901529 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:55:59.914241 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:55:59.914266 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:55:59.914312 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:55:59.924884 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:55:59.924950 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:55:59.935494 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:55:59.946222 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:55:59.946321 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:55:59.956749 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.967027 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:55:59.967110 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.979124 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:55:59.989689 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:55:59.989751 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
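The grep/rm sequence above checks whether each leftover kubeconfig still points at the expected control-plane endpoint and removes it when it does not (or, as here, when the file no longer exists after the reset). A sketch of that cleanup; the endpoint string and file names are taken from the log, the helper itself is illustrative:

    // Remove kubeadm-generated kubeconfigs that no longer reference the
    // expected control-plane endpoint, so "kubeadm init" regenerates them.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			_ = os.Remove(f) // ignore "no such file", just like the log above
    		}
    	}
    }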
	I0603 13:56:00.000616 1142862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:00.230878 1142862 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:04.811725 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:04.811929 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:08.995375 1142862 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 13:56:08.995463 1142862 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:08.995588 1142862 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:08.995724 1142862 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:08.995874 1142862 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:08.995970 1142862 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:08.997810 1142862 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:08.997914 1142862 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:08.998045 1142862 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:08.998154 1142862 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:08.998321 1142862 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:08.998423 1142862 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:08.998506 1142862 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:08.998578 1142862 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:08.998665 1142862 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:08.998764 1142862 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:08.998860 1142862 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:08.998919 1142862 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:08.999011 1142862 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:08.999111 1142862 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:08.999202 1142862 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 13:56:08.999275 1142862 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:08.999354 1142862 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:08.999423 1142862 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:08.999538 1142862 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:08.999692 1142862 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:09.001133 1142862 out.go:204]   - Booting up control plane ...
	I0603 13:56:09.001218 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:09.001293 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:09.001354 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:09.001499 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:09.001584 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:09.001637 1142862 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:09.001768 1142862 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 13:56:09.001881 1142862 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 13:56:09.001941 1142862 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.923053ms
	I0603 13:56:09.002010 1142862 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 13:56:09.002090 1142862 kubeadm.go:309] [api-check] The API server is healthy after 5.502208975s
	I0603 13:56:09.002224 1142862 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 13:56:09.002363 1142862 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 13:56:09.002457 1142862 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 13:56:09.002647 1142862 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-817450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 13:56:09.002713 1142862 kubeadm.go:309] [bootstrap-token] Using token: a7hbk8.xb8is7k6ewa3l3ya
	I0603 13:56:09.004666 1142862 out.go:204]   - Configuring RBAC rules ...
	I0603 13:56:09.004792 1142862 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 13:56:09.004883 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 13:56:09.005026 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 13:56:09.005234 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 13:56:09.005389 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 13:56:09.005531 1142862 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 13:56:09.005651 1142862 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 13:56:09.005709 1142862 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 13:56:09.005779 1142862 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 13:56:09.005787 1142862 kubeadm.go:309] 
	I0603 13:56:09.005869 1142862 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 13:56:09.005885 1142862 kubeadm.go:309] 
	I0603 13:56:09.006014 1142862 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 13:56:09.006034 1142862 kubeadm.go:309] 
	I0603 13:56:09.006076 1142862 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 13:56:09.006136 1142862 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 13:56:09.006197 1142862 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 13:56:09.006203 1142862 kubeadm.go:309] 
	I0603 13:56:09.006263 1142862 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 13:56:09.006273 1142862 kubeadm.go:309] 
	I0603 13:56:09.006330 1142862 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 13:56:09.006338 1142862 kubeadm.go:309] 
	I0603 13:56:09.006393 1142862 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 13:56:09.006476 1142862 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 13:56:09.006542 1142862 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 13:56:09.006548 1142862 kubeadm.go:309] 
	I0603 13:56:09.006629 1142862 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 13:56:09.006746 1142862 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 13:56:09.006758 1142862 kubeadm.go:309] 
	I0603 13:56:09.006850 1142862 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.006987 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 13:56:09.007028 1142862 kubeadm.go:309] 	--control-plane 
	I0603 13:56:09.007037 1142862 kubeadm.go:309] 
	I0603 13:56:09.007141 1142862 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 13:56:09.007170 1142862 kubeadm.go:309] 
	I0603 13:56:09.007266 1142862 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.007427 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
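The --discovery-token-ca-cert-hash printed in the join commands above is, per kubeadm's documentation, the SHA-256 of the cluster CA certificate's Subject Public Key Info. A sketch that recomputes it from /etc/kubernetes/pki/ca.crt (the conventional kubeadm path; this is not minikube code):

    // Recompute kubeadm's discovery-token CA cert hash: sha256 over the
    // DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(spki)
    	fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum)
    }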
	I0603 13:56:09.007451 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:56:09.007464 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:56:09.009292 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:56:09.010750 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:56:09.022810 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
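The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration. The sketch below writes a typical bridge conflist to a temporary path so its shape is visible; the cniVersion, subnet, and plugin options are placeholders, not the exact contents of minikube's file:

    // Write an illustrative bridge CNI conflist (host-local IPAM plus portmap).
    package main

    import (
    	"fmt"
    	"os"
    )

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Written to /tmp here; the real file lives in /etc/cni/net.d and needs root.
    	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("wrote /tmp/1-k8s.conflist")
    }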
	I0603 13:56:09.052132 1142862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-817450 minikube.k8s.io/updated_at=2024_06_03T13_56_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=no-preload-817450 minikube.k8s.io/primary=true
	I0603 13:56:09.291610 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.296892 1142862 ops.go:34] apiserver oom_adj: -16
	I0603 13:56:09.792736 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.292471 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.792688 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.291782 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.792454 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.292056 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.792150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.292620 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.792024 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.292501 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.791790 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.292128 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.792608 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.292106 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.292276 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.292644 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.792571 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.292064 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.791908 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.292511 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.792137 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.292153 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.791809 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.882178 1142862 kubeadm.go:1107] duration metric: took 12.830108615s to wait for elevateKubeSystemPrivileges
	W0603 13:56:21.882223 1142862 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 13:56:21.882236 1142862 kubeadm.go:393] duration metric: took 5m15.237452092s to StartCluster
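The burst of "kubectl get sa default" runs above is minikube waiting, roughly every half second, for the default service account to exist after creating the minikube-rbac clusterrolebinding. A sketch of that poll loop; the kubectl path and kubeconfig are copied from the log, while the timeout is arbitrary:

    // Poll "kubectl get sa default" until the default service account exists.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.30.1/kubectl"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is present")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }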
	I0603 13:56:21.882260 1142862 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.882368 1142862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:56:21.883986 1142862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.884288 1142862 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:56:21.885915 1142862 out.go:177] * Verifying Kubernetes components...
	I0603 13:56:21.884411 1142862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:56:21.884504 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:56:21.887156 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:56:21.887168 1142862 addons.go:69] Setting storage-provisioner=true in profile "no-preload-817450"
	I0603 13:56:21.887199 1142862 addons.go:69] Setting metrics-server=true in profile "no-preload-817450"
	I0603 13:56:21.887230 1142862 addons.go:234] Setting addon storage-provisioner=true in "no-preload-817450"
	W0603 13:56:21.887245 1142862 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:56:21.887261 1142862 addons.go:234] Setting addon metrics-server=true in "no-preload-817450"
	W0603 13:56:21.887276 1142862 addons.go:243] addon metrics-server should already be in state true
	I0603 13:56:21.887295 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887316 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887156 1142862 addons.go:69] Setting default-storageclass=true in profile "no-preload-817450"
	I0603 13:56:21.887366 1142862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-817450"
	I0603 13:56:21.887709 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887711 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887749 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887752 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887779 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887778 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.906019 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I0603 13:56:21.906319 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I0603 13:56:21.906563 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0603 13:56:21.906601 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.906714 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907043 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907126 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907143 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907269 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907288 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907558 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907578 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907752 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.907891 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908248 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.908269 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.908419 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908487 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.909150 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.909175 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.912898 1142862 addons.go:234] Setting addon default-storageclass=true in "no-preload-817450"
	W0603 13:56:21.912926 1142862 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:56:21.912963 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.913361 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.913413 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.928877 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0603 13:56:21.929336 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0603 13:56:21.929541 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930006 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930064 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0603 13:56:21.930161 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930186 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930580 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930723 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.930798 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930812 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930891 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.931037 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.931052 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.931187 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931369 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931394 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.932113 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.932140 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.933613 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.936068 1142862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:56:21.934518 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.937788 1142862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:21.937821 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:56:21.937844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.939174 1142862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:56:21.940435 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:56:21.940458 1142862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:56:21.940559 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.942628 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.943950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944227 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944257 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944449 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944658 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.944734 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944780 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.944919 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944932 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.945154 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.945309 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.945457 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.951140 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0603 13:56:21.951606 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.952125 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.952152 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.952579 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.952808 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.954505 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.954760 1142862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:21.954781 1142862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:56:21.954801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.958298 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.958816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.958851 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.959086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.959325 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.959515 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.959678 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
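The "new ssh client" entries above show minikube dialing the node with the per-machine RSA key it generated. A sketch of an equivalent connection using golang.org/x/crypto/ssh; host-key verification is skipped here purely for brevity, and the address, user, and key path are copied from the log:

    // Open an SSH session to the minikube node and run one command over it.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only; not for production use
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.125:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
    	fmt.Printf("%s (err=%v)\n", out, err)
    }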
	I0603 13:56:22.102359 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:56:22.121380 1142862 node_ready.go:35] waiting up to 6m0s for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135572 1142862 node_ready.go:49] node "no-preload-817450" has status "Ready":"True"
	I0603 13:56:22.135599 1142862 node_ready.go:38] duration metric: took 14.156504ms for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135614 1142862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:22.151036 1142862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:22.283805 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:22.288913 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:56:22.288938 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:56:22.297769 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:22.329187 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:56:22.329221 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:56:22.393569 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:22.393594 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:56:22.435605 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:23.470078 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18622743s)
	I0603 13:56:23.470155 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.172344092s)
	I0603 13:56:23.470171 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470192 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470200 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470216 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470515 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.470553 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470567 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470576 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470586 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470589 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470602 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470613 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470625 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470807 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470823 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.471108 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.471138 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.471180 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492187 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.492226 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.492596 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.492618 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492636 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.892903 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.45716212s)
	I0603 13:56:23.892991 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893006 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893418 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893426 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893442 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893459 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893468 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893790 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893811 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893832 1142862 addons.go:475] Verifying addon metrics-server=true in "no-preload-817450"
	I0603 13:56:23.895990 1142862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:56:23.897968 1142862 addons.go:510] duration metric: took 2.013558036s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
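The addon step above boils down to copying the manifests under /etc/kubernetes/addons and applying them with a single kubectl invocation run with KUBECONFIG pointing at the cluster. A sketch of that invocation; the paths mirror the log and the wrapper itself is illustrative:

    // Apply the metrics-server addon manifests in one kubectl call,
    // passing KUBECONFIG through sudo as in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.30.1/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }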
	I0603 13:56:24.157803 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"False"
	I0603 13:56:24.658730 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.658765 1142862 pod_ready.go:81] duration metric: took 2.507699067s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.658779 1142862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664053 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.664084 1142862 pod_ready.go:81] duration metric: took 5.2962ms for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664096 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668496 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.668521 1142862 pod_ready.go:81] duration metric: took 4.417565ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668533 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673549 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.673568 1142862 pod_ready.go:81] duration metric: took 5.026882ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673577 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678207 1142862 pod_ready.go:92] pod "kube-proxy-t45fn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.678228 1142862 pod_ready.go:81] duration metric: took 4.644345ms for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678239 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056174 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:25.056204 1142862 pod_ready.go:81] duration metric: took 377.957963ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056214 1142862 pod_ready.go:38] duration metric: took 2.920586356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:25.056231 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:56:25.056294 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:56:25.071253 1142862 api_server.go:72] duration metric: took 3.186917827s to wait for apiserver process to appear ...
	I0603 13:56:25.071291 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:56:25.071319 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:56:25.076592 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:56:25.077531 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:56:25.077553 1142862 api_server.go:131] duration metric: took 6.255263ms to wait for apiserver health ...
	I0603 13:56:25.077561 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:56:25.258520 1142862 system_pods.go:59] 9 kube-system pods found
	I0603 13:56:25.258552 1142862 system_pods.go:61] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.258557 1142862 system_pods.go:61] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.258560 1142862 system_pods.go:61] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.258565 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.258569 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.258573 1142862 system_pods.go:61] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.258578 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.258585 1142862 system_pods.go:61] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.258591 1142862 system_pods.go:61] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.258603 1142862 system_pods.go:74] duration metric: took 181.034608ms to wait for pod list to return data ...
	I0603 13:56:25.258618 1142862 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:56:25.454775 1142862 default_sa.go:45] found service account: "default"
	I0603 13:56:25.454810 1142862 default_sa.go:55] duration metric: took 196.18004ms for default service account to be created ...
	I0603 13:56:25.454820 1142862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:56:25.658868 1142862 system_pods.go:86] 9 kube-system pods found
	I0603 13:56:25.658908 1142862 system_pods.go:89] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.658919 1142862 system_pods.go:89] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.658926 1142862 system_pods.go:89] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.658932 1142862 system_pods.go:89] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.658938 1142862 system_pods.go:89] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.658944 1142862 system_pods.go:89] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.658950 1142862 system_pods.go:89] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.658959 1142862 system_pods.go:89] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.658970 1142862 system_pods.go:89] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.658983 1142862 system_pods.go:126] duration metric: took 204.156078ms to wait for k8s-apps to be running ...
	I0603 13:56:25.658999 1142862 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:56:25.659058 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:25.674728 1142862 system_svc.go:56] duration metric: took 15.717684ms WaitForService to wait for kubelet
	I0603 13:56:25.674759 1142862 kubeadm.go:576] duration metric: took 3.790431991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:56:25.674777 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:56:25.855640 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:56:25.855671 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:56:25.855684 1142862 node_conditions.go:105] duration metric: took 180.901974ms to run NodePressure ...
	I0603 13:56:25.855696 1142862 start.go:240] waiting for startup goroutines ...
	I0603 13:56:25.855703 1142862 start.go:245] waiting for cluster config update ...
	I0603 13:56:25.855716 1142862 start.go:254] writing updated cluster config ...
	I0603 13:56:25.856020 1142862 ssh_runner.go:195] Run: rm -f paused
	I0603 13:56:25.908747 1142862 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:56:25.911049 1142862 out.go:177] * Done! kubectl is now configured to use "no-preload-817450" cluster and "default" namespace by default
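	For reference, the post-start state reported above (apiserver /healthz returning 200, kube-system pods Running, metrics-server still Pending) can be spot-checked from the host with kubectl against the freshly written context. This is a minimal sketch, assuming the kubectl context is named after the profile shown in the "Done!" line:
		kubectl --context no-preload-817450 get --raw /healthz
		kubectl --context no-preload-817450 get nodes
		kubectl --context no-preload-817450 get pods -n kube-system -o wide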
	I0603 13:56:44.813650 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:44.813933 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:44.813964 1143678 kubeadm.go:309] 
	I0603 13:56:44.814039 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:56:44.814075 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:56:44.814115 1143678 kubeadm.go:309] 
	I0603 13:56:44.814197 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:56:44.814246 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:56:44.814369 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:56:44.814378 1143678 kubeadm.go:309] 
	I0603 13:56:44.814496 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:56:44.814540 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:56:44.814573 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:56:44.814580 1143678 kubeadm.go:309] 
	I0603 13:56:44.814685 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:56:44.814785 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:56:44.814798 1143678 kubeadm.go:309] 
	I0603 13:56:44.814896 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:56:44.815001 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:56:44.815106 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:56:44.815208 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:56:44.815220 1143678 kubeadm.go:309] 
	I0603 13:56:44.816032 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:44.816137 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:56:44.816231 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 13:56:44.816405 1143678 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
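	The kubeadm output above already names the commands it expects an operator to run on the node. A hedged way to run them against this guest from the host is via minikube ssh; the profile name here is inferred from the CRI-O log hostname later in this section, and the binary path matches the one used elsewhere in this report:
		out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo systemctl status kubelet --no-pager"
		out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
		out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"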
	
	I0603 13:56:44.816480 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:56:45.288649 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:45.305284 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:56:45.316705 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:56:45.316736 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:56:45.316804 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:56:45.327560 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:56:45.327630 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:56:45.337910 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:56:45.349864 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:56:45.349948 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:56:45.361369 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.371797 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:56:45.371866 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.382861 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:56:45.393310 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:56:45.393382 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:56:45.403822 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:45.476725 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:56:45.476794 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:45.630786 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:45.630956 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:45.631125 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:45.814370 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:45.816372 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:45.816481 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:45.816556 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:45.816710 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:45.816831 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:45.816928 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:45.817003 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:45.817093 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:45.817178 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:45.817328 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:45.817477 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:45.817533 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:45.817607 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:46.025905 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:46.331809 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:46.551488 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:46.636938 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:46.663292 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:46.663400 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:46.663448 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:46.840318 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:46.842399 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:56:46.842530 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:46.851940 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:46.855283 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:46.855443 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:46.857883 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:57:26.860915 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:57:26.861047 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:26.861296 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:31.861724 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:31.862046 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:41.862803 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:41.863057 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:01.862907 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:01.863136 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862069 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:41.862391 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862430 1143678 kubeadm.go:309] 
	I0603 13:58:41.862535 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:58:41.862613 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:58:41.862624 1143678 kubeadm.go:309] 
	I0603 13:58:41.862675 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:58:41.862737 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:58:41.862895 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:58:41.862909 1143678 kubeadm.go:309] 
	I0603 13:58:41.863030 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:58:41.863060 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:58:41.863090 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:58:41.863100 1143678 kubeadm.go:309] 
	I0603 13:58:41.863230 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:58:41.863388 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:58:41.863406 1143678 kubeadm.go:309] 
	I0603 13:58:41.863583 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:58:41.863709 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:58:41.863811 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:58:41.863894 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:58:41.863917 1143678 kubeadm.go:309] 
	I0603 13:58:41.865001 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:58:41.865120 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:58:41.865209 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 13:58:41.865361 1143678 kubeadm.go:393] duration metric: took 8m3.432874561s to StartCluster
	I0603 13:58:41.865460 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:58:41.865537 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:58:41.912780 1143678 cri.go:89] found id: ""
	I0603 13:58:41.912812 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.912826 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:58:41.912832 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:58:41.912901 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:58:41.951372 1143678 cri.go:89] found id: ""
	I0603 13:58:41.951402 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.951411 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:58:41.951418 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:58:41.951490 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:58:41.989070 1143678 cri.go:89] found id: ""
	I0603 13:58:41.989104 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.989115 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:58:41.989123 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:58:41.989191 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:58:42.026208 1143678 cri.go:89] found id: ""
	I0603 13:58:42.026238 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.026246 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:58:42.026252 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:58:42.026312 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:58:42.064899 1143678 cri.go:89] found id: ""
	I0603 13:58:42.064941 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.064950 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:58:42.064971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:58:42.065043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:58:42.098817 1143678 cri.go:89] found id: ""
	I0603 13:58:42.098858 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.098868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:58:42.098876 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:58:42.098939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:58:42.133520 1143678 cri.go:89] found id: ""
	I0603 13:58:42.133558 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.133570 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:58:42.133579 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:58:42.133639 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:58:42.187356 1143678 cri.go:89] found id: ""
	I0603 13:58:42.187387 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.187399 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:58:42.187412 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:58:42.187434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:58:42.249992 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:58:42.250034 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:58:42.272762 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:58:42.272801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:58:42.362004 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:58:42.362030 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:58:42.362046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:58:42.468630 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:58:42.468676 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0603 13:58:42.510945 1143678 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 13:58:42.511002 1143678 out.go:239] * 
	W0603 13:58:42.511094 1143678 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.511119 1143678 out.go:239] * 
	W0603 13:58:42.512307 1143678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:58:42.516199 1143678 out.go:177] 
	W0603 13:58:42.517774 1143678 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.517848 1143678 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 13:58:42.517883 1143678 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 13:58:42.519747 1143678 out.go:177] 
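	The suggestion above points at a kubelet/CRI-O cgroup-driver mismatch as the usual cause of this failure mode. A minimal sketch of how one might compare the two drivers inside the guest and retry with the suggested override; the profile name is inferred as above, and any other flags from the original start invocation are omitted and would need to be re-supplied:
		out/minikube-linux-amd64 -p old-k8s-version-151788 ssh "sudo grep -ri cgroup_manager /etc/crio/; sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml"
		out/minikube-linux-amd64 start -p old-k8s-version-151788 --extra-config=kubelet.cgroup-driver=systemd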
	
	
	==> CRI-O <==
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.342394804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423668342370253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eda08dad-b1c3-4efe-89e3-8a4843da46b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.342976057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6be93d73-5b7b-424f-8a14-e21ab2e33fb4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.343069235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6be93d73-5b7b-424f-8a14-e21ab2e33fb4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.343108728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6be93d73-5b7b-424f-8a14-e21ab2e33fb4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.376862104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e1df8cd-1f0d-4575-adc8-926ef111acb0 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.376973734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e1df8cd-1f0d-4575-adc8-926ef111acb0 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.377894304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50c828f9-af63-4fbb-9a58-278ac74e4f9e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.378454445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423668378430657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50c828f9-af63-4fbb-9a58-278ac74e4f9e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.378951764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c739e4f-eb7c-4870-80d2-6babe6e25fcc name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.379006918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c739e4f-eb7c-4870-80d2-6babe6e25fcc name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.379036565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6c739e4f-eb7c-4870-80d2-6babe6e25fcc name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.413599796Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74c76374-47e1-4d91-9be6-6ec8f1cbacc4 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.413672372Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74c76374-47e1-4d91-9be6-6ec8f1cbacc4 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.414852113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8f49c82-68ba-4411-9530-5b1e5ce1f649 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.415350532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423668415319324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8f49c82-68ba-4411-9530-5b1e5ce1f649 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.416108962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96330c0c-f260-4b84-bfd8-2bd470d5d936 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.416208495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96330c0c-f260-4b84-bfd8-2bd470d5d936 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.416324750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=96330c0c-f260-4b84-bfd8-2bd470d5d936 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.449308510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f4c6e3a-fdfe-43ed-a8fe-9d2a3c56242e name=/runtime.v1.RuntimeService/Version
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.449390212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f4c6e3a-fdfe-43ed-a8fe-9d2a3c56242e name=/runtime.v1.RuntimeService/Version
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.450376724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=937f8905-b3a5-479e-8bef-4144c395d5cc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.450740430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423668450720230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=937f8905-b3a5-479e-8bef-4144c395d5cc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.451240449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4452cf47-f73e-4f50-ae2a-9937a54239d1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.451349848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4452cf47-f73e-4f50-ae2a-9937a54239d1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:07:48 old-k8s-version-151788 crio[659]: time="2024-06-03 14:07:48.451389189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4452cf47-f73e-4f50-ae2a-9937a54239d1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun 3 13:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055954] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042975] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.825342] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.576562] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.695734] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.047871] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.063174] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.087641] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.197728] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.185593] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.323645] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +6.685681] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.076136] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.031593] systemd-fstab-generator[973]: Ignoring "noauto" option for root device
	[ +10.661843] kauditd_printk_skb: 46 callbacks suppressed
	[Jun 3 13:54] systemd-fstab-generator[5027]: Ignoring "noauto" option for root device
	[Jun 3 13:56] systemd-fstab-generator[5309]: Ignoring "noauto" option for root device
	[  +0.079564] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:07:48 up 17 min,  0 users,  load average: 0.14, 0.08, 0.08
	Linux old-k8s-version-151788 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006d36f0)
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009b9ef0, 0x4f0ac20, 0xc000b9e0f0, 0x1, 0xc0000a60c0)
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001d9180, 0xc0000a60c0)
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000965bc0, 0xc000a15ec0)
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6487]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jun 03 14:07:43 old-k8s-version-151788 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 03 14:07:43 old-k8s-version-151788 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 03 14:07:43 old-k8s-version-151788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jun 03 14:07:43 old-k8s-version-151788 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 03 14:07:43 old-k8s-version-151788 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6495]: I0603 14:07:43.747496    6495 server.go:416] Version: v1.20.0
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6495]: I0603 14:07:43.747833    6495 server.go:837] Client rotation is on, will bootstrap in background
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6495]: I0603 14:07:43.750296    6495 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6495]: I0603 14:07:43.751398    6495 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jun 03 14:07:43 old-k8s-version-151788 kubelet[6495]: W0603 14:07:43.751421    6495 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 2 (232.925395ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-151788" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.73s)
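The skip recorded just above follows from the harness's status probe: it reads the APIServer field via a Go template and appears to issue kubectl commands only when that field reports Running. A minimal sketch of the same probe, assuming the old-k8s-version-151788 profile from this run still exists (profile name and binary path taken from the log above):

	# Query host and apiserver state separately; "Stopped" plus a non-zero exit
	# is the combination helpers_test.go labels "may be ok".
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788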

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (439.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223260 -n embed-certs-223260
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-03 14:10:51.528472232 +0000 UTC m=+6403.733030736
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-223260 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-223260 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.25µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-223260 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
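The assertion chain above (wait for pods with label k8s-app=kubernetes-dashboard, then describe the dashboard-metrics-scraper deployment and look for the image substituted via --images=MetricsScraper=registry.k8s.io/echoserver:1.4) could in principle be reproduced by hand. A minimal sketch, assuming the embed-certs-223260 apiserver is reachable (within this run it was not, hence the context deadline errors):

	# Pods the harness waited 9m0s for, selected by the same label
	kubectl --context embed-certs-223260 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Deployment the harness describes; the test expects "registry.k8s.io/echoserver:1.4" in its image spec
	kubectl --context embed-certs-223260 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper | grep -i image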
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223260 -n embed-certs-223260
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-223260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-223260 logs -n 25: (1.323817754s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo find                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo crio                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-021279                                       | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-069000 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | disable-driver-mounts-069000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:43 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-817450             | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC | 03 Jun 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223260            | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-030870  | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-817450                  | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-151788        | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC | 03 Jun 24 13:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223260                 | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC | 03 Jun 24 13:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-030870       | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:54 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-151788             | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 14:10 UTC | 03 Jun 24 14:10 UTC |
	| start   | -p newest-cni-937150 --memory=2200 --alsologtostderr   | newest-cni-937150            | jenkins | v1.33.1 | 03 Jun 24 14:10 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 14:10:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 14:10:10.374422 1149858 out.go:291] Setting OutFile to fd 1 ...
	I0603 14:10:10.374815 1149858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:10:10.374864 1149858 out.go:304] Setting ErrFile to fd 2...
	I0603 14:10:10.374882 1149858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:10:10.375383 1149858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 14:10:10.376501 1149858 out.go:298] Setting JSON to false
	I0603 14:10:10.377800 1149858 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17557,"bootTime":1717406253,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 14:10:10.377878 1149858 start.go:139] virtualization: kvm guest
	I0603 14:10:10.380015 1149858 out.go:177] * [newest-cni-937150] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 14:10:10.381875 1149858 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 14:10:10.381875 1149858 notify.go:220] Checking for updates...
	I0603 14:10:10.383549 1149858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 14:10:10.384915 1149858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 14:10:10.386228 1149858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 14:10:10.387567 1149858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 14:10:10.389006 1149858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 14:10:10.390875 1149858 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 14:10:10.390965 1149858 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 14:10:10.391042 1149858 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 14:10:10.391141 1149858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 14:10:10.428062 1149858 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 14:10:10.429645 1149858 start.go:297] selected driver: kvm2
	I0603 14:10:10.429671 1149858 start.go:901] validating driver "kvm2" against <nil>
	I0603 14:10:10.429683 1149858 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 14:10:10.430490 1149858 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 14:10:10.430587 1149858 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 14:10:10.447373 1149858 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 14:10:10.447435 1149858 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0603 14:10:10.447479 1149858 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0603 14:10:10.447820 1149858 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0603 14:10:10.447938 1149858 cni.go:84] Creating CNI manager for ""
	I0603 14:10:10.447966 1149858 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 14:10:10.447982 1149858 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 14:10:10.448077 1149858 start.go:340] cluster config:
	{Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:10:10.448191 1149858 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 14:10:10.450351 1149858 out.go:177] * Starting "newest-cni-937150" primary control-plane node in "newest-cni-937150" cluster
	I0603 14:10:10.451445 1149858 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 14:10:10.451486 1149858 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 14:10:10.451500 1149858 cache.go:56] Caching tarball of preloaded images
	I0603 14:10:10.451625 1149858 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 14:10:10.451636 1149858 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 14:10:10.451762 1149858 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/config.json ...
	I0603 14:10:10.451795 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/config.json: {Name:mk67c4ee36f012c1f62af75927e93199f9c68f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:10.451989 1149858 start.go:360] acquireMachinesLock for newest-cni-937150: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 14:10:10.452047 1149858 start.go:364] duration metric: took 27.044µs to acquireMachinesLock for "newest-cni-937150"
	I0603 14:10:10.452076 1149858 start.go:93] Provisioning new machine with config: &{Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 14:10:10.452194 1149858 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 14:10:10.453898 1149858 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 14:10:10.454062 1149858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 14:10:10.454116 1149858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 14:10:10.469365 1149858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0603 14:10:10.469815 1149858 main.go:141] libmachine: () Calling .GetVersion
	I0603 14:10:10.470411 1149858 main.go:141] libmachine: Using API Version  1
	I0603 14:10:10.470435 1149858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 14:10:10.470796 1149858 main.go:141] libmachine: () Calling .GetMachineName
	I0603 14:10:10.471043 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetMachineName
	I0603 14:10:10.471189 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:10.471379 1149858 start.go:159] libmachine.API.Create for "newest-cni-937150" (driver="kvm2")
	I0603 14:10:10.471410 1149858 client.go:168] LocalClient.Create starting
	I0603 14:10:10.471441 1149858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 14:10:10.471475 1149858 main.go:141] libmachine: Decoding PEM data...
	I0603 14:10:10.471494 1149858 main.go:141] libmachine: Parsing certificate...
	I0603 14:10:10.471550 1149858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 14:10:10.471569 1149858 main.go:141] libmachine: Decoding PEM data...
	I0603 14:10:10.471580 1149858 main.go:141] libmachine: Parsing certificate...
	I0603 14:10:10.471594 1149858 main.go:141] libmachine: Running pre-create checks...
	I0603 14:10:10.471601 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .PreCreateCheck
	I0603 14:10:10.471986 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetConfigRaw
	I0603 14:10:10.472455 1149858 main.go:141] libmachine: Creating machine...
	I0603 14:10:10.472473 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .Create
	I0603 14:10:10.472663 1149858 main.go:141] libmachine: (newest-cni-937150) Creating KVM machine...
	I0603 14:10:10.474103 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found existing default KVM network
	I0603 14:10:10.475564 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:10.475343 1149881 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4f:31:b1} reservation:<nil>}
	I0603 14:10:10.476932 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:10.476834 1149881 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b4a40}
	I0603 14:10:10.476973 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | created network xml: 
	I0603 14:10:10.476986 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | <network>
	I0603 14:10:10.476993 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   <name>mk-newest-cni-937150</name>
	I0603 14:10:10.477001 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   <dns enable='no'/>
	I0603 14:10:10.477009 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   
	I0603 14:10:10.477019 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0603 14:10:10.477051 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |     <dhcp>
	I0603 14:10:10.477081 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0603 14:10:10.477093 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |     </dhcp>
	I0603 14:10:10.477103 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   </ip>
	I0603 14:10:10.477116 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   
	I0603 14:10:10.477125 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | </network>
	I0603 14:10:10.477140 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | 
	I0603 14:10:10.482715 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | trying to create private KVM network mk-newest-cni-937150 192.168.50.0/24...
	I0603 14:10:10.557546 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | private KVM network mk-newest-cni-937150 192.168.50.0/24 created
	I0603 14:10:10.557594 1149858 main.go:141] libmachine: (newest-cni-937150) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150 ...
	I0603 14:10:10.557623 1149858 main.go:141] libmachine: (newest-cni-937150) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 14:10:10.557647 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:10.557589 1149881 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 14:10:10.557890 1149858 main.go:141] libmachine: (newest-cni-937150) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 14:10:10.859248 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:10.859069 1149881 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa...
	I0603 14:10:11.063760 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:11.063617 1149881 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/newest-cni-937150.rawdisk...
	I0603 14:10:11.063809 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Writing magic tar header
	I0603 14:10:11.063827 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Writing SSH key tar header
	I0603 14:10:11.063841 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:11.063779 1149881 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150 ...
	I0603 14:10:11.063930 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150
	I0603 14:10:11.063981 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150 (perms=drwx------)
	I0603 14:10:11.064011 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 14:10:11.064024 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 14:10:11.064036 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 14:10:11.064046 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 14:10:11.064063 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 14:10:11.064076 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 14:10:11.064098 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 14:10:11.064113 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 14:10:11.064124 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 14:10:11.064133 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins
	I0603 14:10:11.064147 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home
	I0603 14:10:11.064157 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Skipping /home - not owner
	I0603 14:10:11.064165 1149858 main.go:141] libmachine: (newest-cni-937150) Creating domain...
	I0603 14:10:11.065459 1149858 main.go:141] libmachine: (newest-cni-937150) define libvirt domain using xml: 
	I0603 14:10:11.065495 1149858 main.go:141] libmachine: (newest-cni-937150) <domain type='kvm'>
	I0603 14:10:11.065507 1149858 main.go:141] libmachine: (newest-cni-937150)   <name>newest-cni-937150</name>
	I0603 14:10:11.065515 1149858 main.go:141] libmachine: (newest-cni-937150)   <memory unit='MiB'>2200</memory>
	I0603 14:10:11.065524 1149858 main.go:141] libmachine: (newest-cni-937150)   <vcpu>2</vcpu>
	I0603 14:10:11.065537 1149858 main.go:141] libmachine: (newest-cni-937150)   <features>
	I0603 14:10:11.065546 1149858 main.go:141] libmachine: (newest-cni-937150)     <acpi/>
	I0603 14:10:11.065555 1149858 main.go:141] libmachine: (newest-cni-937150)     <apic/>
	I0603 14:10:11.065566 1149858 main.go:141] libmachine: (newest-cni-937150)     <pae/>
	I0603 14:10:11.065576 1149858 main.go:141] libmachine: (newest-cni-937150)     
	I0603 14:10:11.065584 1149858 main.go:141] libmachine: (newest-cni-937150)   </features>
	I0603 14:10:11.065592 1149858 main.go:141] libmachine: (newest-cni-937150)   <cpu mode='host-passthrough'>
	I0603 14:10:11.065629 1149858 main.go:141] libmachine: (newest-cni-937150)   
	I0603 14:10:11.065645 1149858 main.go:141] libmachine: (newest-cni-937150)   </cpu>
	I0603 14:10:11.065654 1149858 main.go:141] libmachine: (newest-cni-937150)   <os>
	I0603 14:10:11.065662 1149858 main.go:141] libmachine: (newest-cni-937150)     <type>hvm</type>
	I0603 14:10:11.065680 1149858 main.go:141] libmachine: (newest-cni-937150)     <boot dev='cdrom'/>
	I0603 14:10:11.065691 1149858 main.go:141] libmachine: (newest-cni-937150)     <boot dev='hd'/>
	I0603 14:10:11.065702 1149858 main.go:141] libmachine: (newest-cni-937150)     <bootmenu enable='no'/>
	I0603 14:10:11.065712 1149858 main.go:141] libmachine: (newest-cni-937150)   </os>
	I0603 14:10:11.065720 1149858 main.go:141] libmachine: (newest-cni-937150)   <devices>
	I0603 14:10:11.065731 1149858 main.go:141] libmachine: (newest-cni-937150)     <disk type='file' device='cdrom'>
	I0603 14:10:11.065744 1149858 main.go:141] libmachine: (newest-cni-937150)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/boot2docker.iso'/>
	I0603 14:10:11.065757 1149858 main.go:141] libmachine: (newest-cni-937150)       <target dev='hdc' bus='scsi'/>
	I0603 14:10:11.065768 1149858 main.go:141] libmachine: (newest-cni-937150)       <readonly/>
	I0603 14:10:11.065778 1149858 main.go:141] libmachine: (newest-cni-937150)     </disk>
	I0603 14:10:11.065787 1149858 main.go:141] libmachine: (newest-cni-937150)     <disk type='file' device='disk'>
	I0603 14:10:11.065799 1149858 main.go:141] libmachine: (newest-cni-937150)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 14:10:11.065817 1149858 main.go:141] libmachine: (newest-cni-937150)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/newest-cni-937150.rawdisk'/>
	I0603 14:10:11.065828 1149858 main.go:141] libmachine: (newest-cni-937150)       <target dev='hda' bus='virtio'/>
	I0603 14:10:11.065843 1149858 main.go:141] libmachine: (newest-cni-937150)     </disk>
	I0603 14:10:11.065853 1149858 main.go:141] libmachine: (newest-cni-937150)     <interface type='network'>
	I0603 14:10:11.065870 1149858 main.go:141] libmachine: (newest-cni-937150)       <source network='mk-newest-cni-937150'/>
	I0603 14:10:11.065881 1149858 main.go:141] libmachine: (newest-cni-937150)       <model type='virtio'/>
	I0603 14:10:11.065889 1149858 main.go:141] libmachine: (newest-cni-937150)     </interface>
	I0603 14:10:11.065897 1149858 main.go:141] libmachine: (newest-cni-937150)     <interface type='network'>
	I0603 14:10:11.065906 1149858 main.go:141] libmachine: (newest-cni-937150)       <source network='default'/>
	I0603 14:10:11.065914 1149858 main.go:141] libmachine: (newest-cni-937150)       <model type='virtio'/>
	I0603 14:10:11.065948 1149858 main.go:141] libmachine: (newest-cni-937150)     </interface>
	I0603 14:10:11.065974 1149858 main.go:141] libmachine: (newest-cni-937150)     <serial type='pty'>
	I0603 14:10:11.065989 1149858 main.go:141] libmachine: (newest-cni-937150)       <target port='0'/>
	I0603 14:10:11.066001 1149858 main.go:141] libmachine: (newest-cni-937150)     </serial>
	I0603 14:10:11.066010 1149858 main.go:141] libmachine: (newest-cni-937150)     <console type='pty'>
	I0603 14:10:11.066021 1149858 main.go:141] libmachine: (newest-cni-937150)       <target type='serial' port='0'/>
	I0603 14:10:11.066028 1149858 main.go:141] libmachine: (newest-cni-937150)     </console>
	I0603 14:10:11.066043 1149858 main.go:141] libmachine: (newest-cni-937150)     <rng model='virtio'>
	I0603 14:10:11.066056 1149858 main.go:141] libmachine: (newest-cni-937150)       <backend model='random'>/dev/random</backend>
	I0603 14:10:11.066068 1149858 main.go:141] libmachine: (newest-cni-937150)     </rng>
	I0603 14:10:11.066079 1149858 main.go:141] libmachine: (newest-cni-937150)     
	I0603 14:10:11.066089 1149858 main.go:141] libmachine: (newest-cni-937150)     
	I0603 14:10:11.066098 1149858 main.go:141] libmachine: (newest-cni-937150)   </devices>
	I0603 14:10:11.066111 1149858 main.go:141] libmachine: (newest-cni-937150) </domain>
	I0603 14:10:11.066124 1149858 main.go:141] libmachine: (newest-cni-937150) 
	I0603 14:10:11.070908 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:49:67:db in network default
	I0603 14:10:11.071583 1149858 main.go:141] libmachine: (newest-cni-937150) Ensuring networks are active...
	I0603 14:10:11.071604 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:11.072703 1149858 main.go:141] libmachine: (newest-cni-937150) Ensuring network default is active
	I0603 14:10:11.073040 1149858 main.go:141] libmachine: (newest-cni-937150) Ensuring network mk-newest-cni-937150 is active
	I0603 14:10:11.073579 1149858 main.go:141] libmachine: (newest-cni-937150) Getting domain xml...
	I0603 14:10:11.074420 1149858 main.go:141] libmachine: (newest-cni-937150) Creating domain...
	I0603 14:10:12.353217 1149858 main.go:141] libmachine: (newest-cni-937150) Waiting to get IP...
	I0603 14:10:12.354164 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:12.354684 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:12.354715 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:12.354621 1149881 retry.go:31] will retry after 259.893127ms: waiting for machine to come up
	I0603 14:10:12.616214 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:12.616729 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:12.616772 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:12.616700 1149881 retry.go:31] will retry after 380.359ms: waiting for machine to come up
	I0603 14:10:12.998408 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:12.998910 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:12.998957 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:12.998864 1149881 retry.go:31] will retry after 488.054448ms: waiting for machine to come up
	I0603 14:10:13.488678 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:13.489157 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:13.489200 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:13.489102 1149881 retry.go:31] will retry after 414.42816ms: waiting for machine to come up
	I0603 14:10:13.905763 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:13.906365 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:13.906391 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:13.906303 1149881 retry.go:31] will retry after 521.411782ms: waiting for machine to come up
	I0603 14:10:14.429056 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:14.429543 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:14.429623 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:14.429519 1149881 retry.go:31] will retry after 814.866584ms: waiting for machine to come up
	I0603 14:10:15.245815 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:15.246317 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:15.246346 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:15.246258 1149881 retry.go:31] will retry after 1.021138707s: waiting for machine to come up
	I0603 14:10:16.268550 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:16.269101 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:16.269146 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:16.269064 1149881 retry.go:31] will retry after 1.167022182s: waiting for machine to come up
	I0603 14:10:17.437421 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:17.437866 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:17.437895 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:17.437817 1149881 retry.go:31] will retry after 1.415790047s: waiting for machine to come up
	I0603 14:10:18.855773 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:18.856289 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:18.856318 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:18.856227 1149881 retry.go:31] will retry after 2.184943297s: waiting for machine to come up
	I0603 14:10:21.043362 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:21.043841 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:21.043873 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:21.043763 1149881 retry.go:31] will retry after 1.770659238s: waiting for machine to come up
	I0603 14:10:22.816815 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:22.817222 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:22.817247 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:22.817177 1149881 retry.go:31] will retry after 3.405487359s: waiting for machine to come up
	I0603 14:10:26.223700 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:26.224169 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:26.224215 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:26.224129 1149881 retry.go:31] will retry after 4.352919539s: waiting for machine to come up
	I0603 14:10:30.578405 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.578883 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has current primary IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.578913 1149858 main.go:141] libmachine: (newest-cni-937150) Found IP for machine: 192.168.50.117
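The retry loop above is minikube polling libvirt until the new domain picks up a DHCP lease on the mk-newest-cni-937150 network; the intervals grow from a few hundred milliseconds to several seconds. A minimal way to watch the same lease appear by hand, assuming the virsh client is installed and qemu:///system is reachable, is:

  virsh --connect qemu:///system net-dhcp-leases mk-newest-cni-937150

Once a row with MAC 52:54:00:86:c8:b7 shows an address, the wait ends (here with 192.168.50.117).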
	I0603 14:10:30.578927 1149858 main.go:141] libmachine: (newest-cni-937150) Reserving static IP address...
	I0603 14:10:30.579338 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find host DHCP lease matching {name: "newest-cni-937150", mac: "52:54:00:86:c8:b7", ip: "192.168.50.117"} in network mk-newest-cni-937150
	I0603 14:10:30.659372 1149858 main.go:141] libmachine: (newest-cni-937150) Reserved static IP address: 192.168.50.117
	I0603 14:10:30.659404 1149858 main.go:141] libmachine: (newest-cni-937150) Waiting for SSH to be available...
	I0603 14:10:30.659414 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Getting to WaitForSSH function...
	I0603 14:10:30.662675 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.663210 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:30.663248 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.663392 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Using SSH client type: external
	I0603 14:10:30.663413 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa (-rw-------)
	I0603 14:10:30.663443 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 14:10:30.663453 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | About to run SSH command:
	I0603 14:10:30.663462 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | exit 0
	I0603 14:10:30.793746 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | SSH cmd err, output: <nil>: 
	I0603 14:10:30.794052 1149858 main.go:141] libmachine: (newest-cni-937150) KVM machine creation complete!
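The WaitForSSH step shells out to the external ssh client with the options logged above and simply runs `exit 0` until it succeeds. A rough equivalent using a subset of those options (a sketch, not minikube's exact invocation):

  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o ConnectTimeout=10 -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa \
      docker@192.168.50.117 exit 0 && echo "SSH is up"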
	I0603 14:10:30.794357 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetConfigRaw
	I0603 14:10:30.794981 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:30.795200 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:30.795436 1149858 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 14:10:30.795463 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetState
	I0603 14:10:30.796938 1149858 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 14:10:30.796963 1149858 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 14:10:30.796969 1149858 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 14:10:30.796975 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:30.799957 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.800464 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:30.800514 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.800687 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:30.800859 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:30.801037 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:30.801178 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:30.801343 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:30.801644 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:30.801662 1149858 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 14:10:30.916985 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 14:10:30.917013 1149858 main.go:141] libmachine: Detecting the provisioner...
	I0603 14:10:30.917026 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:30.920194 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.920652 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:30.920702 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.920869 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:30.921116 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:30.921362 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:30.921548 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:30.921720 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:30.921946 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:30.921961 1149858 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 14:10:31.038406 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 14:10:31.038510 1149858 main.go:141] libmachine: found compatible host: buildroot
	I0603 14:10:31.038520 1149858 main.go:141] libmachine: Provisioning with buildroot...
	I0603 14:10:31.038537 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetMachineName
	I0603 14:10:31.038827 1149858 buildroot.go:166] provisioning hostname "newest-cni-937150"
	I0603 14:10:31.038863 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetMachineName
	I0603 14:10:31.039075 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.042115 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.042590 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.042627 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.042676 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:31.042909 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.043120 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.043295 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:31.043502 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:31.043658 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:31.043673 1149858 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-937150 && echo "newest-cni-937150" | sudo tee /etc/hostname
	I0603 14:10:31.172948 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-937150
	
	I0603 14:10:31.172982 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.175909 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.176278 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.176319 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.176492 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:31.176724 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.176927 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.177075 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:31.177265 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:31.177514 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:31.177532 1149858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-937150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-937150/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-937150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 14:10:31.301133 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
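The script above sets the guest hostname and keeps the 127.0.1.1 line in /etc/hosts in sync with it. A quick sanity check, run on the guest over the same SSH session:

  hostname                    # expect: newest-cni-937150
  grep 127.0.1.1 /etc/hosts   # expect: 127.0.1.1 newest-cni-937150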
	I0603 14:10:31.301174 1149858 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 14:10:31.301233 1149858 buildroot.go:174] setting up certificates
	I0603 14:10:31.301248 1149858 provision.go:84] configureAuth start
	I0603 14:10:31.301262 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetMachineName
	I0603 14:10:31.301691 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetIP
	I0603 14:10:31.304488 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.304830 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.304850 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.305012 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.307547 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.307892 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.307926 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.308151 1149858 provision.go:143] copyHostCerts
	I0603 14:10:31.308221 1149858 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 14:10:31.308242 1149858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 14:10:31.308308 1149858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 14:10:31.308442 1149858 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 14:10:31.308453 1149858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 14:10:31.308479 1149858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 14:10:31.308563 1149858 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 14:10:31.308571 1149858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 14:10:31.308593 1149858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 14:10:31.308639 1149858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.newest-cni-937150 san=[127.0.0.1 192.168.50.117 localhost minikube newest-cni-937150]
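configureAuth issues a server certificate whose SANs cover loopback, the machine IP and the hostnames listed above. One way to confirm the SANs made it into the generated cert, assuming openssl is available on the Jenkins host:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'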
	I0603 14:10:31.706209 1149858 provision.go:177] copyRemoteCerts
	I0603 14:10:31.706272 1149858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 14:10:31.706314 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.709224 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.709619 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.709657 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.709918 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:31.710161 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.710341 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:31.710557 1149858 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa Username:docker}
	I0603 14:10:31.795463 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 14:10:31.823433 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 14:10:31.849834 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 14:10:31.875234 1149858 provision.go:87] duration metric: took 573.971415ms to configureAuth
	I0603 14:10:31.875263 1149858 buildroot.go:189] setting minikube options for container-runtime
	I0603 14:10:31.875484 1149858 config.go:182] Loaded profile config "newest-cni-937150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 14:10:31.875587 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.878457 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.878781 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.878811 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.879029 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:31.879253 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.879526 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.879729 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:31.879918 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:31.880107 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:31.880129 1149858 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 14:10:32.183853 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
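The command above writes the --insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O so it takes effect. To confirm on the guest (a sketch; the unit wiring is a detail of the minikube ISO):

  cat /etc/sysconfig/crio.minikube
  sudo systemctl is-active crio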
	
	I0603 14:10:32.183921 1149858 main.go:141] libmachine: Checking connection to Docker...
	I0603 14:10:32.183935 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetURL
	I0603 14:10:32.185578 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Using libvirt version 6000000
	I0603 14:10:32.188015 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.188388 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.188422 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.188588 1149858 main.go:141] libmachine: Docker is up and running!
	I0603 14:10:32.188607 1149858 main.go:141] libmachine: Reticulating splines...
	I0603 14:10:32.188616 1149858 client.go:171] duration metric: took 21.71719522s to LocalClient.Create
	I0603 14:10:32.188653 1149858 start.go:167] duration metric: took 21.717275209s to libmachine.API.Create "newest-cni-937150"
	I0603 14:10:32.188666 1149858 start.go:293] postStartSetup for "newest-cni-937150" (driver="kvm2")
	I0603 14:10:32.188681 1149858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 14:10:32.188705 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.188982 1149858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 14:10:32.189007 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:32.191264 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.191625 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.191662 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.191801 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:32.191990 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.192129 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:32.192296 1149858 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa Username:docker}
	I0603 14:10:32.281048 1149858 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 14:10:32.285937 1149858 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 14:10:32.285968 1149858 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 14:10:32.286066 1149858 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 14:10:32.286192 1149858 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 14:10:32.286347 1149858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 14:10:32.296925 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 14:10:32.323191 1149858 start.go:296] duration metric: took 134.509724ms for postStartSetup
	I0603 14:10:32.323285 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetConfigRaw
	I0603 14:10:32.323977 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetIP
	I0603 14:10:32.326874 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.327266 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.327297 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.327660 1149858 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/config.json ...
	I0603 14:10:32.327865 1149858 start.go:128] duration metric: took 21.875654351s to createHost
	I0603 14:10:32.327906 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:32.330266 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.330570 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.330600 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.330754 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:32.330950 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.331103 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.331236 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:32.331391 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:32.331606 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:32.331624 1149858 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 14:10:32.450769 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717423832.419930077
	
	I0603 14:10:32.450799 1149858 fix.go:216] guest clock: 1717423832.419930077
	I0603 14:10:32.450809 1149858 fix.go:229] Guest: 2024-06-03 14:10:32.419930077 +0000 UTC Remote: 2024-06-03 14:10:32.32788916 +0000 UTC m=+21.990339354 (delta=92.040917ms)
	I0603 14:10:32.450853 1149858 fix.go:200] guest clock delta is within tolerance: 92.040917ms
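The guest-clock check runs `date +%s.%N` on the VM and compares it with the host's wall clock, accepting the drift when it is within tolerance (92ms here). A minimal reproduction of the same delta calculation from the host, assuming the key path used earlier:

  KEY=/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa
  guest=$(ssh -i "$KEY" docker@192.168.50.117 date +%s.%N)
  host_now=$(date +%s.%N)
  awk -v h="$host_now" -v g="$guest" 'BEGIN { printf "delta: %.3fs\n", h - g }'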
	I0603 14:10:32.450864 1149858 start.go:83] releasing machines lock for "newest-cni-937150", held for 21.998803352s
	I0603 14:10:32.450900 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.451205 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetIP
	I0603 14:10:32.454324 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.454704 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.454735 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.454911 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.455517 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.455751 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.455834 1149858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 14:10:32.455891 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:32.456032 1149858 ssh_runner.go:195] Run: cat /version.json
	I0603 14:10:32.456063 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:32.458708 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.458932 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.459076 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.459106 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.459244 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.459278 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:32.459284 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.459450 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:32.459565 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.459675 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.459722 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:32.459834 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:32.459954 1149858 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa Username:docker}
	I0603 14:10:32.460007 1149858 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa Username:docker}
	I0603 14:10:32.571744 1149858 ssh_runner.go:195] Run: systemctl --version
	I0603 14:10:32.578153 1149858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 14:10:32.734363 1149858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 14:10:32.740408 1149858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 14:10:32.740487 1149858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 14:10:32.758408 1149858 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 14:10:32.758435 1149858 start.go:494] detecting cgroup driver to use...
	I0603 14:10:32.758533 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 14:10:32.776560 1149858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:10:32.792117 1149858 docker.go:217] disabling cri-docker service (if available) ...
	I0603 14:10:32.792177 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 14:10:32.808337 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 14:10:32.823964 1149858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 14:10:32.949275 1149858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 14:10:33.102685 1149858 docker.go:233] disabling docker service ...
	I0603 14:10:33.102768 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 14:10:33.117681 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 14:10:33.131293 1149858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 14:10:33.254836 1149858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 14:10:33.389135 1149858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 14:10:33.404530 1149858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:10:33.425480 1149858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 14:10:33.425550 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.437504 1149858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 14:10:33.437578 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.449342 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.460206 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.470545 1149858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 14:10:33.480990 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.491735 1149858 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.510331 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
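The sed edits above pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup and open unprivileged ports through default_sysctls. The net effect can be checked on the guest with:

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf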
	I0603 14:10:33.521151 1149858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 14:10:33.531116 1149858 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 14:10:33.531184 1149858 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 14:10:33.544277 1149858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
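Because /proc/sys/net/bridge does not exist until br_netfilter is loaded, the first sysctl probe fails with status 255; minikube then loads the module and enables IPv4 forwarding. On the guest, the resulting state can be verified with:

  sudo modprobe br_netfilter
  sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward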
	I0603 14:10:33.554136 1149858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:10:33.676654 1149858 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 14:10:33.834506 1149858 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 14:10:33.834595 1149858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 14:10:33.840526 1149858 start.go:562] Will wait 60s for crictl version
	I0603 14:10:33.840595 1149858 ssh_runner.go:195] Run: which crictl
	I0603 14:10:33.845114 1149858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 14:10:33.892075 1149858 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 14:10:33.892152 1149858 ssh_runner.go:195] Run: crio --version
	I0603 14:10:33.922949 1149858 ssh_runner.go:195] Run: crio --version
	I0603 14:10:33.955395 1149858 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 14:10:33.956703 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetIP
	I0603 14:10:33.959426 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:33.959838 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:33.959866 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:33.960150 1149858 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 14:10:33.964703 1149858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:10:33.980474 1149858 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0603 14:10:33.981718 1149858 kubeadm.go:877] updating cluster {Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 14:10:33.981855 1149858 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 14:10:33.981929 1149858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 14:10:34.019028 1149858 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 14:10:34.019122 1149858 ssh_runner.go:195] Run: which lz4
	I0603 14:10:34.023813 1149858 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 14:10:34.028406 1149858 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 14:10:34.028436 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 14:10:35.555838 1149858 crio.go:462] duration metric: took 1.532063899s to copy over tarball
	I0603 14:10:35.555925 1149858 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 14:10:37.872818 1149858 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.316853217s)
	I0603 14:10:37.872853 1149858 crio.go:469] duration metric: took 2.316978379s to extract the tarball
	I0603 14:10:37.872864 1149858 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 14:10:37.913205 1149858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 14:10:37.962131 1149858 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 14:10:37.962164 1149858 cache_images.go:84] Images are preloaded, skipping loading
	I0603 14:10:37.962180 1149858 kubeadm.go:928] updating node { 192.168.50.117 8443 v1.30.1 crio true true} ...
	I0603 14:10:37.962353 1149858 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-937150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 14:10:37.962440 1149858 ssh_runner.go:195] Run: crio config
	I0603 14:10:38.014940 1149858 cni.go:84] Creating CNI manager for ""
	I0603 14:10:38.014962 1149858 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 14:10:38.014971 1149858 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0603 14:10:38.014994 1149858 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.117 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-937150 NodeName:newest-cni-937150 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.50.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 14:10:38.015151 1149858 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-937150"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
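This generated kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. If you want to sanity-check such a file by hand, recent kubeadm releases include a validator (a sketch, assuming the binary path the log uses):

  sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new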
	
	I0603 14:10:38.015216 1149858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 14:10:38.028856 1149858 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 14:10:38.028944 1149858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 14:10:38.039665 1149858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0603 14:10:38.057433 1149858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 14:10:38.075806 1149858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0603 14:10:38.095263 1149858 ssh_runner.go:195] Run: grep 192.168.50.117	control-plane.minikube.internal$ /etc/hosts
	I0603 14:10:38.099443 1149858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.117	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:10:38.112429 1149858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:10:38.248428 1149858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:10:38.267105 1149858 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150 for IP: 192.168.50.117
	I0603 14:10:38.267138 1149858 certs.go:194] generating shared ca certs ...
	I0603 14:10:38.267159 1149858 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.267393 1149858 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 14:10:38.267472 1149858 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 14:10:38.267492 1149858 certs.go:256] generating profile certs ...
	I0603 14:10:38.267580 1149858 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.key
	I0603 14:10:38.267608 1149858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.crt with IP's: []
	I0603 14:10:38.388932 1149858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.crt ...
	I0603 14:10:38.388966 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.crt: {Name:mkeac1660cd9acdbde243d96ed0eaf6d3aafc544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.389206 1149858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.key ...
	I0603 14:10:38.389226 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.key: {Name:mk3c5dcc647c1a29bae27c60136d087199f77c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.389323 1149858 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key.095d4dda
	I0603 14:10:38.389340 1149858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt.095d4dda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.117]
	I0603 14:10:38.553425 1149858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt.095d4dda ...
	I0603 14:10:38.553459 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt.095d4dda: {Name:mkc545fa3ddf8db013caa5ca7400370bda54bcf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.553631 1149858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key.095d4dda ...
	I0603 14:10:38.553644 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key.095d4dda: {Name:mkb5e93bb41b676b2b6fac31f5e42d8cf2dff975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.553731 1149858 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt.095d4dda -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt
	I0603 14:10:38.553811 1149858 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key.095d4dda -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key
	I0603 14:10:38.553865 1149858 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.key
	I0603 14:10:38.553881 1149858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.crt with IP's: []
	I0603 14:10:38.779341 1149858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.crt ...
	I0603 14:10:38.779375 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.crt: {Name:mk2e008c1cb6dc4fb341b2e569324ba9b7ca2339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.779605 1149858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.key ...
	I0603 14:10:38.779624 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.key: {Name:mkd05bb014dcd066b3793441b0da6f58fa48ffdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.779862 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 14:10:38.779906 1149858 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 14:10:38.779923 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 14:10:38.779957 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 14:10:38.779985 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 14:10:38.780025 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 14:10:38.780085 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
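
The certs.go/crypto.go lines above show the profile's apiserver certificate being generated with the SAN IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.117. A minimal sketch of producing a cert with IP SANs via crypto/x509 is below; the key size, validity, subject and self-signing are assumptions for brevity (the real profile cert is signed against the shared minikubeCA key skipped as "valid" earlier in the log).

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN IPs recorded in the log for this profile cert.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.117"),
    		},
    	}
    	// Self-signed here for brevity; minikube signs with its CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
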
	I0603 14:10:38.780753 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 14:10:38.815194 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 14:10:38.844375 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 14:10:38.872853 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 14:10:38.900181 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 14:10:38.927682 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 14:10:38.956080 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 14:10:38.985021 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 14:10:39.013358 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 14:10:39.042106 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 14:10:39.072045 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 14:10:39.099885 1149858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 14:10:39.118274 1149858 ssh_runner.go:195] Run: openssl version
	I0603 14:10:39.125064 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 14:10:39.136385 1149858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 14:10:39.141326 1149858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 14:10:39.141383 1149858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 14:10:39.147900 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 14:10:39.159667 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 14:10:39.171044 1149858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:10:39.175817 1149858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:10:39.175882 1149858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:10:39.182138 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 14:10:39.206779 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 14:10:39.224524 1149858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 14:10:39.230510 1149858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 14:10:39.230584 1149858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 14:10:39.245290 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
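
The three openssl/ln passes above install each CA bundle under /etc/ssl/certs/<subject-hash>.0 so OpenSSL-based clients can look it up by hash. A minimal sketch of that pattern is below; the output directory is illustrative, and the real run executes openssl and ln over SSH with sudo.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash computes `openssl x509 -hash -noout -in cert` and symlinks
    // certsDir/<hash>.0 to the certificate, mirroring the log's ln -fs step.
    func linkByHash(cert, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		return err
    	}
    	if err := os.MkdirAll(certsDir, 0o755); err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // emulate ln -fs (force)
    	return os.Symlink(cert, link)
    }

    func main() {
    	// Illustrative paths; the run links /usr/share/ca-certificates/*.pem into /etc/ssl/certs.
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs-example"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
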
	I0603 14:10:39.262292 1149858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:10:39.266909 1149858 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:10:39.266964 1149858 kubeadm.go:391] StartCluster: {Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:10:39.267048 1149858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 14:10:39.267111 1149858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 14:10:39.311861 1149858 cri.go:89] found id: ""
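
StartCluster first asks CRI-O whether any kube-system containers already exist, using the crictl invocation shown above (it finds none on this fresh node). A minimal local sketch of the same query, shelling out to crictl directly rather than through ssh_runner and sudo:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same flags as the command recorded in the log, minus sudo/SSH.
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    }
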
	I0603 14:10:39.311950 1149858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 14:10:39.328960 1149858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 14:10:39.340618 1149858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 14:10:39.351555 1149858 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 14:10:39.351581 1149858 kubeadm.go:156] found existing configuration files:
	
	I0603 14:10:39.351634 1149858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 14:10:39.361659 1149858 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 14:10:39.361735 1149858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 14:10:39.372490 1149858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 14:10:39.382854 1149858 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 14:10:39.382918 1149858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 14:10:39.394239 1149858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 14:10:39.403819 1149858 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 14:10:39.403896 1149858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 14:10:39.413875 1149858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 14:10:39.424160 1149858 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 14:10:39.424235 1149858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
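
The four grep/rm pairs above are the pre-init cleanup loop: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, the file is removed unless it already points at https://control-plane.minikube.internal:8443, so kubeadm init starts from a clean /etc/kubernetes. A minimal local sketch of that check-then-remove pattern (run here without sudo or SSH, so it will merely print permission errors on a real host):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
    	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := filepath.Join("/etc/kubernetes", name)
    		data, err := os.ReadFile(path)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			fmt.Println("keeping", path)
    			continue
    		}
    		// Missing or pointing elsewhere: remove it so kubeadm regenerates it.
    		if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
    			fmt.Fprintln(os.Stderr, rmErr)
    		}
    	}
    }
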
	I0603 14:10:39.434930 1149858 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 14:10:39.564729 1149858 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 14:10:39.564800 1149858 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 14:10:39.690185 1149858 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 14:10:39.690312 1149858 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 14:10:39.690452 1149858 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 14:10:39.910223 1149858 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 14:10:40.032509 1149858 out.go:204]   - Generating certificates and keys ...
	I0603 14:10:40.032624 1149858 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 14:10:40.032709 1149858 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 14:10:40.213537 1149858 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 14:10:40.408041 1149858 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 14:10:40.617896 1149858 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 14:10:40.771057 1149858 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 14:10:40.882541 1149858 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 14:10:40.882764 1149858 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-937150] and IPs [192.168.50.117 127.0.0.1 ::1]
	I0603 14:10:41.058241 1149858 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 14:10:41.058581 1149858 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-937150] and IPs [192.168.50.117 127.0.0.1 ::1]
	I0603 14:10:41.354995 1149858 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 14:10:41.591380 1149858 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 14:10:41.684429 1149858 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 14:10:41.684506 1149858 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 14:10:41.839182 1149858 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 14:10:42.189547 1149858 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 14:10:42.548626 1149858 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 14:10:42.721883 1149858 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 14:10:42.917578 1149858 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 14:10:42.918380 1149858 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 14:10:42.921799 1149858 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 14:10:42.923762 1149858 out.go:204]   - Booting up control plane ...
	I0603 14:10:42.923885 1149858 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 14:10:42.923973 1149858 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 14:10:42.927125 1149858 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 14:10:42.952275 1149858 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 14:10:42.953628 1149858 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 14:10:42.953701 1149858 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 14:10:43.084989 1149858 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 14:10:43.085144 1149858 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 14:10:43.605917 1149858 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 521.163342ms
	I0603 14:10:43.606026 1149858 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 14:10:49.103486 1149858 kubeadm.go:309] [api-check] The API server is healthy after 5.501823647s
	I0603 14:10:49.117564 1149858 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 14:10:49.133952 1149858 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 14:10:49.162476 1149858 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 14:10:49.162725 1149858 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-937150 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 14:10:49.176769 1149858 kubeadm.go:309] [bootstrap-token] Using token: 9iogr7.p4sgbr1j5c0rj9kv
	I0603 14:10:49.178455 1149858 out.go:204]   - Configuring RBAC rules ...
	I0603 14:10:49.178595 1149858 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 14:10:49.186898 1149858 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 14:10:49.195454 1149858 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 14:10:49.199406 1149858 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 14:10:49.204372 1149858 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 14:10:49.211328 1149858 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 14:10:49.513626 1149858 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 14:10:49.959338 1149858 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 14:10:50.513629 1149858 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 14:10:50.515215 1149858 kubeadm.go:309] 
	I0603 14:10:50.515328 1149858 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 14:10:50.515350 1149858 kubeadm.go:309] 
	I0603 14:10:50.515452 1149858 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 14:10:50.515462 1149858 kubeadm.go:309] 
	I0603 14:10:50.515504 1149858 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 14:10:50.515588 1149858 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 14:10:50.515678 1149858 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 14:10:50.515700 1149858 kubeadm.go:309] 
	I0603 14:10:50.515790 1149858 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 14:10:50.515811 1149858 kubeadm.go:309] 
	I0603 14:10:50.515886 1149858 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 14:10:50.515896 1149858 kubeadm.go:309] 
	I0603 14:10:50.515964 1149858 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 14:10:50.516080 1149858 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 14:10:50.516176 1149858 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 14:10:50.516186 1149858 kubeadm.go:309] 
	I0603 14:10:50.516310 1149858 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 14:10:50.516433 1149858 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 14:10:50.516446 1149858 kubeadm.go:309] 
	I0603 14:10:50.516567 1149858 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9iogr7.p4sgbr1j5c0rj9kv \
	I0603 14:10:50.516719 1149858 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 14:10:50.516756 1149858 kubeadm.go:309] 	--control-plane 
	I0603 14:10:50.516762 1149858 kubeadm.go:309] 
	I0603 14:10:50.516877 1149858 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 14:10:50.516889 1149858 kubeadm.go:309] 
	I0603 14:10:50.516989 1149858 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9iogr7.p4sgbr1j5c0rj9kv \
	I0603 14:10:50.517140 1149858 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
	I0603 14:10:50.517299 1149858 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 14:10:50.517351 1149858 cni.go:84] Creating CNI manager for ""
	I0603 14:10:50.517372 1149858 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 14:10:50.519331 1149858 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
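
The run then moves on to "Configuring bridge CNI", i.e. writing a conflist for the bridge plugin under /etc/cni/net.d. The log does not show the file minikube writes, so the sketch below is only an assumption of what a typical bridge conflist looks like, embedded as a Go string and written to a temporary path; the file name, subnet and field values are illustrative.

    package main

    import (
    	"fmt"
    	"os"
    )

    // Illustrative bridge CNI conflist; values are assumptions, not taken from this run.
    const bridgeConflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// Illustrative target; the real config lands under /etc/cni/net.d on the node.
    	if err := os.WriteFile("/tmp/bridge-example.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
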
	
	
	==> CRI-O <==
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.213952782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423852213918913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e333fff9-fe6c-42bd-a096-4b899e0dbe3d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.215346704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0db4d518-8915-4627-9a22-8526b245b8b5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.215416247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0db4d518-8915-4627-9a22-8526b245b8b5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.215590475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422631458640760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792eab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1bcc8fd115462594b57cf2f156c17b6430e92d0215d9e85b595b804bdde5a0,PodSandboxId:21c139c5637b1e6fb84ff27abb4d8ccc37204ab70f3839945ca14b6c0315fced,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422609263490429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 281b59a6-05da-460b-a9de-353a33f7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 86c813e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574,PodSandboxId:2236cb094f7ea29487f0c17b14b07af0c8a34d72721ccbb2b0e7e8dbcd75289b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422608426831157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdjrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a490ea5-c189-4d28-bd6b-509610d35f37,},Annotations:map[string]string{io.kubernetes.container.hash: b90f4363,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1,PodSandboxId:928e28c81071bc7e9c03b3c60aa6537d56e01260a9a1241612b3020d1ac622fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422600620990049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5vdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c515f67-d265-4140-8
2ec-ba9ac4ddda80,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc2270,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422600623813012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792e
ab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87,PodSandboxId:9e10d6ddb6a57b24c35cfeeb3344d2f0a50479e14797081a21e741618587ab78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422595884622538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb8fdbada528a524d35aa6977dd3e121,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05,PodSandboxId:9b2b4e4bc09bf83a74b7cda03a0c98a894560ad4ed473d4d5c97de96b5c9a0f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422595941894235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607b55931bc26d0cc508f778fa375941,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6d82c3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d,PodSandboxId:10bbc9d33598f91cd2fcdd614d2644ea6231fa73297da25751ec608eaccf4794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422595921637833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee5d442f159594c31098eb6daf7db70,},Annotations:map[string]string{io.k
ubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a,PodSandboxId:daff7e3f3b25385b3289805a323290be27422e2e530c8ce0806f63f5121d103e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422595838880008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd73855663a9f159bd07762c06815ac3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 97521c20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0db4d518-8915-4627-9a22-8526b245b8b5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.262452768Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80dcbf50-9326-4898-8068-054e9153ccac name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.262529646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80dcbf50-9326-4898-8068-054e9153ccac name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.263786796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1245355c-0b8b-48ec-8dd6-0e83dfb2a78f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.264690953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423852264665453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1245355c-0b8b-48ec-8dd6-0e83dfb2a78f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.265743667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d42eb176-2833-4e8b-92d0-0bfeb2460ded name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.265816681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d42eb176-2833-4e8b-92d0-0bfeb2460ded name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.266009934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422631458640760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792eab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1bcc8fd115462594b57cf2f156c17b6430e92d0215d9e85b595b804bdde5a0,PodSandboxId:21c139c5637b1e6fb84ff27abb4d8ccc37204ab70f3839945ca14b6c0315fced,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422609263490429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 281b59a6-05da-460b-a9de-353a33f7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 86c813e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574,PodSandboxId:2236cb094f7ea29487f0c17b14b07af0c8a34d72721ccbb2b0e7e8dbcd75289b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422608426831157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdjrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a490ea5-c189-4d28-bd6b-509610d35f37,},Annotations:map[string]string{io.kubernetes.container.hash: b90f4363,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1,PodSandboxId:928e28c81071bc7e9c03b3c60aa6537d56e01260a9a1241612b3020d1ac622fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422600620990049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5vdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c515f67-d265-4140-8
2ec-ba9ac4ddda80,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc2270,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422600623813012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792e
ab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87,PodSandboxId:9e10d6ddb6a57b24c35cfeeb3344d2f0a50479e14797081a21e741618587ab78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422595884622538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb8fdbada528a524d35aa6977dd3e121,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05,PodSandboxId:9b2b4e4bc09bf83a74b7cda03a0c98a894560ad4ed473d4d5c97de96b5c9a0f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422595941894235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607b55931bc26d0cc508f778fa375941,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6d82c3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d,PodSandboxId:10bbc9d33598f91cd2fcdd614d2644ea6231fa73297da25751ec608eaccf4794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422595921637833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee5d442f159594c31098eb6daf7db70,},Annotations:map[string]string{io.k
ubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a,PodSandboxId:daff7e3f3b25385b3289805a323290be27422e2e530c8ce0806f63f5121d103e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422595838880008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd73855663a9f159bd07762c06815ac3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 97521c20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d42eb176-2833-4e8b-92d0-0bfeb2460ded name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.307943494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28fbb43f-ff0a-41b2-8712-259359f50248 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.308018965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28fbb43f-ff0a-41b2-8712-259359f50248 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.309397321Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d069bf81-202d-4960-b379-290e9cf5e07e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.309951337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423852309929581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d069bf81-202d-4960-b379-290e9cf5e07e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.310609170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c60088f3-0482-4d6e-90b9-f53fcd2e3d5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.310699835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c60088f3-0482-4d6e-90b9-f53fcd2e3d5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.310908403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422631458640760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792eab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1bcc8fd115462594b57cf2f156c17b6430e92d0215d9e85b595b804bdde5a0,PodSandboxId:21c139c5637b1e6fb84ff27abb4d8ccc37204ab70f3839945ca14b6c0315fced,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422609263490429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 281b59a6-05da-460b-a9de-353a33f7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 86c813e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574,PodSandboxId:2236cb094f7ea29487f0c17b14b07af0c8a34d72721ccbb2b0e7e8dbcd75289b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422608426831157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdjrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a490ea5-c189-4d28-bd6b-509610d35f37,},Annotations:map[string]string{io.kubernetes.container.hash: b90f4363,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1,PodSandboxId:928e28c81071bc7e9c03b3c60aa6537d56e01260a9a1241612b3020d1ac622fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422600620990049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5vdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c515f67-d265-4140-8
2ec-ba9ac4ddda80,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc2270,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422600623813012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792e
ab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87,PodSandboxId:9e10d6ddb6a57b24c35cfeeb3344d2f0a50479e14797081a21e741618587ab78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422595884622538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb8fdbada528a524d35aa6977dd3e121,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05,PodSandboxId:9b2b4e4bc09bf83a74b7cda03a0c98a894560ad4ed473d4d5c97de96b5c9a0f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422595941894235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607b55931bc26d0cc508f778fa375941,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6d82c3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d,PodSandboxId:10bbc9d33598f91cd2fcdd614d2644ea6231fa73297da25751ec608eaccf4794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422595921637833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee5d442f159594c31098eb6daf7db70,},Annotations:map[string]string{io.k
ubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a,PodSandboxId:daff7e3f3b25385b3289805a323290be27422e2e530c8ce0806f63f5121d103e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422595838880008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd73855663a9f159bd07762c06815ac3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 97521c20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c60088f3-0482-4d6e-90b9-f53fcd2e3d5f name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.347017461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e110228f-cec7-46eb-9c97-2e25c39f8ba1 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.347090946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e110228f-cec7-46eb-9c97-2e25c39f8ba1 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.348117644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a138b52-c09b-426a-b205-349cee680754 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.348671431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423852348596052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a138b52-c09b-426a-b205-349cee680754 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.349341612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68892935-ff60-457a-b65c-e5117fa78876 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.349440998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68892935-ff60-457a-b65c-e5117fa78876 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:52 embed-certs-223260 crio[727]: time="2024-06-03 14:10:52.349709460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422631458640760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792eab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1bcc8fd115462594b57cf2f156c17b6430e92d0215d9e85b595b804bdde5a0,PodSandboxId:21c139c5637b1e6fb84ff27abb4d8ccc37204ab70f3839945ca14b6c0315fced,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422609263490429,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 281b59a6-05da-460b-a9de-353a33f7d95c,},Annotations:map[string]string{io.kubernetes.container.hash: 86c813e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574,PodSandboxId:2236cb094f7ea29487f0c17b14b07af0c8a34d72721ccbb2b0e7e8dbcd75289b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422608426831157,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdjrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a490ea5-c189-4d28-bd6b-509610d35f37,},Annotations:map[string]string{io.kubernetes.container.hash: b90f4363,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1,PodSandboxId:928e28c81071bc7e9c03b3c60aa6537d56e01260a9a1241612b3020d1ac622fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422600620990049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s5vdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c515f67-d265-4140-8
2ec-ba9ac4ddda80,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc2270,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88,PodSandboxId:2f1fb72c5f8c2c72ebf4746dbca03f77016b193b2f2458ff4774f83e348649eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422600623813012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff65744-2d90-4589-a97f-d6b4d792e
ab4,},Annotations:map[string]string{io.kubernetes.container.hash: a357a252,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87,PodSandboxId:9e10d6ddb6a57b24c35cfeeb3344d2f0a50479e14797081a21e741618587ab78,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422595884622538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb8fdbada528a524d35aa6977dd3e121,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05,PodSandboxId:9b2b4e4bc09bf83a74b7cda03a0c98a894560ad4ed473d4d5c97de96b5c9a0f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422595941894235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607b55931bc26d0cc508f778fa375941,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 6d82c3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d,PodSandboxId:10bbc9d33598f91cd2fcdd614d2644ea6231fa73297da25751ec608eaccf4794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422595921637833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee5d442f159594c31098eb6daf7db70,},Annotations:map[string]string{io.k
ubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a,PodSandboxId:daff7e3f3b25385b3289805a323290be27422e2e530c8ce0806f63f5121d103e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422595838880008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-223260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd73855663a9f159bd07762c06815ac3,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 97521c20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68892935-ff60-457a-b65c-e5117fa78876 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e0c551e53061e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   2f1fb72c5f8c2       storage-provisioner
	9a1bcc8fd1154       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   21c139c5637b1       busybox
	f8daff2944ee2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   2236cb094f7ea       coredns-7db6d8ff4d-qdjrv
	141e89821d9ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   2f1fb72c5f8c2       storage-provisioner
	c17ec1b1cf666       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      20 minutes ago      Running             kube-proxy                1                   928e28c81071b       kube-proxy-s5vdl
	114ee50eb8f33       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   9b2b4e4bc09bf       etcd-embed-certs-223260
	a4f8ab9c0a067       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      20 minutes ago      Running             kube-controller-manager   1                   10bbc9d33598f       kube-controller-manager-embed-certs-223260
	f1a49ac6ea3e6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      20 minutes ago      Running             kube-scheduler            1                   9e10d6ddb6a57       kube-scheduler-embed-certs-223260
	45eebdf59dbe2       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      20 minutes ago      Running             kube-apiserver            1                   daff7e3f3b253       kube-apiserver-embed-certs-223260
	
	
	==> coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57497 - 43282 "HINFO IN 3215531351917476745.3466927403052893141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036473674s
	
	
	==> describe nodes <==
	Name:               embed-certs-223260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-223260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=embed-certs-223260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_42_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:42:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-223260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:10:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 14:05:49 +0000   Mon, 03 Jun 2024 13:42:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 14:05:49 +0000   Mon, 03 Jun 2024 13:42:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 14:05:49 +0000   Mon, 03 Jun 2024 13:42:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 14:05:49 +0000   Mon, 03 Jun 2024 13:50:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.246
	  Hostname:    embed-certs-223260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e7d84f5be1044fef8f60c281196faa94
	  System UUID:                e7d84f5b-e104-4fef-8f60-c281196faa94
	  Boot ID:                    6e007d64-1412-4605-8915-ff9f1ad29350
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7db6d8ff4d-qdjrv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-223260                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-223260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-223260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-s5vdl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-223260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-v7d9t               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-223260 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-223260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-223260 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node embed-certs-223260 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-223260 event: Registered Node embed-certs-223260 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-223260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-223260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-223260 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-223260 event: Registered Node embed-certs-223260 in Controller
	
	
	==> dmesg <==
	[Jun 3 13:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052280] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040233] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.546708] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.405580] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.634814] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.861112] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.058440] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055014] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.166614] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.133550] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.307987] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.471170] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.062604] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.791615] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[Jun 3 13:50] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.473224] systemd-fstab-generator[1552]: Ignoring "noauto" option for root device
	[  +1.268933] kauditd_printk_skb: 62 callbacks suppressed
	[  +8.167713] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] <==
	{"level":"info","ts":"2024-06-03T13:50:16.178532Z","caller":"traceutil/trace.go:171","msg":"trace[785430932] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"259.336394ms","start":"2024-06-03T13:50:15.919175Z","end":"2024-06-03T13:50:16.178512Z","steps":["trace[785430932] 'process raft request'  (duration: 259.02513ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:16.774438Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"588.443358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-223260\" ","response":"range_response_count:1 size:5500"}
	{"level":"info","ts":"2024-06-03T13:50:16.774505Z","caller":"traceutil/trace.go:171","msg":"trace[714207542] range","detail":"{range_begin:/registry/minions/embed-certs-223260; range_end:; response_count:1; response_revision:603; }","duration":"588.53985ms","start":"2024-06-03T13:50:16.185955Z","end":"2024-06-03T13:50:16.774494Z","steps":["trace[714207542] 'range keys from in-memory index tree'  (duration: 588.343048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:16.774542Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:16.185945Z","time spent":"588.584182ms","remote":"127.0.0.1:60652","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5523,"request content":"key:\"/registry/minions/embed-certs-223260\" "}
	{"level":"info","ts":"2024-06-03T13:50:16.774702Z","caller":"traceutil/trace.go:171","msg":"trace[423434282] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"583.897095ms","start":"2024-06-03T13:50:16.190793Z","end":"2024-06-03T13:50:16.77469Z","steps":["trace[423434282] 'process raft request'  (duration: 583.655177ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:16.775386Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:16.19078Z","time spent":"583.992729ms","remote":"127.0.0.1:60662","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6707,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-223260\" mod_revision:603 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-223260\" value_size:6639 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-223260\" > >"}
	{"level":"warn","ts":"2024-06-03T13:50:38.592544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.017394ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2929749742812550189 > lease_revoke:<id:28a88fde5d2253b9>","response":"size:28"}
	{"level":"info","ts":"2024-06-03T13:50:38.592685Z","caller":"traceutil/trace.go:171","msg":"trace[9174562] linearizableReadLoop","detail":"{readStateIndex:669; appliedIndex:668; }","duration":"289.514822ms","start":"2024-06-03T13:50:38.303143Z","end":"2024-06-03T13:50:38.592658Z","steps":["trace[9174562] 'read index received'  (duration: 24.302777ms)","trace[9174562] 'applied index is now lower than readState.Index'  (duration: 265.210791ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T13:50:38.592819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.697095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-v7d9t\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-06-03T13:50:38.592851Z","caller":"traceutil/trace.go:171","msg":"trace[232053768] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-v7d9t; range_end:; response_count:1; response_revision:623; }","duration":"289.755295ms","start":"2024-06-03T13:50:38.303084Z","end":"2024-06-03T13:50:38.592839Z","steps":["trace[232053768] 'agreement among raft nodes before linearized reading'  (duration: 289.631057ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-03T13:59:58.42747Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":848}
	{"level":"info","ts":"2024-06-03T13:59:58.439305Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":848,"took":"11.438145ms","hash":531994192,"current-db-size-bytes":2670592,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2670592,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-06-03T13:59:58.439373Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":531994192,"revision":848,"compact-revision":-1}
	{"level":"info","ts":"2024-06-03T14:04:58.435855Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1090}
	{"level":"info","ts":"2024-06-03T14:04:58.442765Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1090,"took":"6.572388ms","hash":2728688158,"current-db-size-bytes":2670592,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1568768,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-06-03T14:04:58.442822Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2728688158,"revision":1090,"compact-revision":848}
	{"level":"info","ts":"2024-06-03T14:09:58.44555Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1332}
	{"level":"info","ts":"2024-06-03T14:09:58.450097Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1332,"took":"3.743719ms","hash":3824254587,"current-db-size-bytes":2670592,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-06-03T14:09:58.450181Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3824254587,"revision":1332,"compact-revision":1090}
	{"level":"info","ts":"2024-06-03T14:10:39.388678Z","caller":"traceutil/trace.go:171","msg":"trace[1655550014] linearizableReadLoop","detail":"{readStateIndex:1904; appliedIndex:1903; }","duration":"174.570237ms","start":"2024-06-03T14:10:39.214054Z","end":"2024-06-03T14:10:39.388624Z","steps":["trace[1655550014] 'read index received'  (duration: 174.340024ms)","trace[1655550014] 'applied index is now lower than readState.Index'  (duration: 229.708µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T14:10:39.388888Z","caller":"traceutil/trace.go:171","msg":"trace[1962241271] transaction","detail":"{read_only:false; response_revision:1610; number_of_response:1; }","duration":"190.221872ms","start":"2024-06-03T14:10:39.198636Z","end":"2024-06-03T14:10:39.388858Z","steps":["trace[1962241271] 'process raft request'  (duration: 189.777708ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T14:10:39.389087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.945615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T14:10:39.389789Z","caller":"traceutil/trace.go:171","msg":"trace[2110855559] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1610; }","duration":"175.746274ms","start":"2024-06-03T14:10:39.214031Z","end":"2024-06-03T14:10:39.389777Z","steps":["trace[2110855559] 'agreement among raft nodes before linearized reading'  (duration: 174.942764ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T14:10:39.648416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.387211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T14:10:39.648569Z","caller":"traceutil/trace.go:171","msg":"trace[385667032] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1610; }","duration":"101.582722ms","start":"2024-06-03T14:10:39.546967Z","end":"2024-06-03T14:10:39.64855Z","steps":["trace[385667032] 'range keys from in-memory index tree'  (duration: 101.317712ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:10:52 up 21 min,  0 users,  load average: 0.22, 0.17, 0.11
	Linux embed-certs-223260 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] <==
	I0603 14:05:00.783209       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:06:00.782425       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:06:00.782494       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:06:00.782503       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:06:00.783683       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:06:00.783761       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:06:00.783768       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:08:00.783179       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:08:00.783520       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:08:00.783550       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:08:00.784372       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:08:00.784435       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:08:00.785652       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:09:59.785929       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:09:59.786235       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 14:10:00.787117       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:10:00.787207       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:10:00.787215       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:10:00.787129       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:10:00.787291       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:10:00.788590       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] <==
	I0603 14:05:15.245209       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:05:44.719178       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:05:45.253790       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:06:14.724545       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:06:15.262353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 14:06:39.260305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="223.841µs"
	E0603 14:06:44.729768       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:06:45.270190       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 14:06:50.253095       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="89.23µs"
	E0603 14:07:14.734948       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:07:15.277722       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:07:44.739790       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:07:45.285186       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:08:14.746392       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:08:15.293380       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:08:44.750984       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:08:45.302443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:09:14.756121       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:09:15.310178       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:09:44.760898       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:09:45.318606       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:10:14.768504       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:10:15.326461       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:10:44.774883       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:10:45.335633       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] <==
	I0603 13:50:00.880068       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:50:00.896867       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.83.246"]
	I0603 13:50:00.949675       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:50:00.949766       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:50:00.949897       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:50:00.953304       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:50:00.953550       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:50:00.953808       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:50:00.955560       1 config.go:192] "Starting service config controller"
	I0603 13:50:00.955619       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:50:00.955667       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:50:00.955687       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:50:00.956159       1 config.go:319] "Starting node config controller"
	I0603 13:50:00.956192       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:50:01.056459       1 shared_informer.go:320] Caches are synced for node config
	I0603 13:50:01.056555       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:50:01.056589       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] <==
	I0603 13:49:57.121122       1 serving.go:380] Generated self-signed cert in-memory
	W0603 13:49:59.686786       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 13:49:59.686891       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 13:49:59.686905       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 13:49:59.686962       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 13:49:59.756799       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 13:49:59.756888       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:49:59.762436       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 13:49:59.762633       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 13:49:59.762672       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 13:49:59.764330       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 13:49:59.866831       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 14:07:55 embed-certs-223260 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:07:56 embed-certs-223260 kubelet[939]: E0603 14:07:56.238424     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:08:11 embed-certs-223260 kubelet[939]: E0603 14:08:11.238345     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:08:25 embed-certs-223260 kubelet[939]: E0603 14:08:25.238794     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:08:38 embed-certs-223260 kubelet[939]: E0603 14:08:38.237752     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:08:49 embed-certs-223260 kubelet[939]: E0603 14:08:49.238753     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:08:55 embed-certs-223260 kubelet[939]: E0603 14:08:55.253985     939 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:08:55 embed-certs-223260 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:08:55 embed-certs-223260 kubelet[939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:08:55 embed-certs-223260 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:08:55 embed-certs-223260 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:09:01 embed-certs-223260 kubelet[939]: E0603 14:09:01.238838     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:09:14 embed-certs-223260 kubelet[939]: E0603 14:09:14.237538     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:09:28 embed-certs-223260 kubelet[939]: E0603 14:09:28.237561     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:09:39 embed-certs-223260 kubelet[939]: E0603 14:09:39.241974     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:09:52 embed-certs-223260 kubelet[939]: E0603 14:09:52.238673     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:09:55 embed-certs-223260 kubelet[939]: E0603 14:09:55.255844     939 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:09:55 embed-certs-223260 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:09:55 embed-certs-223260 kubelet[939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:09:55 embed-certs-223260 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:09:55 embed-certs-223260 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:10:03 embed-certs-223260 kubelet[939]: E0603 14:10:03.238987     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:10:18 embed-certs-223260 kubelet[939]: E0603 14:10:18.237709     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:10:30 embed-certs-223260 kubelet[939]: E0603 14:10:30.240363     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	Jun 03 14:10:43 embed-certs-223260 kubelet[939]: E0603 14:10:43.237837     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-v7d9t" podUID="e89c698d-7aab-4acd-a9b3-5ba0315ad681"
	
	
	==> storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] <==
	I0603 13:50:00.789165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0603 13:50:30.795562       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] <==
	I0603 13:50:31.571689       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 13:50:31.585683       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 13:50:31.585798       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 13:50:48.992210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 13:50:48.992920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-223260_c0a76c61-0743-4c2f-ba8a-ad97be818e25!
	I0603 13:50:48.993440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"353379f3-5b07-45b6-b1e9-5e7fcc2c94ed", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-223260_c0a76c61-0743-4c2f-ba8a-ad97be818e25 became leader
	I0603 13:50:49.093426       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-223260_c0a76c61-0743-4c2f-ba8a-ad97be818e25!
	

                                                
                                                
-- /stdout --
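The kubelet entries above show why this post-mortem finds a non-running pod: the metrics-server container never starts because the kubelet keeps backing off pulling fake.domain/registry.k8s.io/echoserver:1.4. A minimal sketch of inspecting that state by hand, assuming the embed-certs-223260 profile from this run were still up (the pod and deployment names are taken from the logs above and are not guaranteed to still exist):

	# Show the metrics-server pod's events, including the ImagePullBackOff reported by the kubelet.
	kubectl --context embed-certs-223260 -n kube-system describe pod metrics-server-569cc877fc-v7d9t
	# Check the image reference configured on the deployment behind the replica set shown above.
	kubectl --context embed-certs-223260 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'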
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-223260 -n embed-certs-223260
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-223260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-v7d9t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-223260 describe pod metrics-server-569cc877fc-v7d9t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-223260 describe pod metrics-server-569cc877fc-v7d9t: exit status 1 (65.01063ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-v7d9t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-223260 describe pod metrics-server-569cc877fc-v7d9t: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (439.94s)
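One caveat when reading the describe failure above: helpers_test.go:277 ran kubectl describe without a namespace, so the lookup most likely went to the default namespace while the non-running pod reported at helpers_test.go:272 lives in kube-system, hence the NotFound. A hedged equivalent that keeps the namespace visible, assuming the cluster were still reachable:

	# List all non-Running pods together with their namespaces, mirroring the harness's field selector.
	kubectl --context embed-certs-223260 get pods -A --field-selector=status.phase!=Running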

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (448.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-03 14:11:21.195684557 +0000 UTC m=+6433.400243073
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-030870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-030870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.555µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-030870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
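The assertion at start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment carries the custom image passed via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (see the Audit table in the post-mortem below), but the describe above hit the test's context deadline. A hypothetical client-go sketch of that image check follows; it assumes a reachable kubeconfig for the default-k8s-diff-port-030870 cluster and is not part of the test suite.

// Hypothetical sketch: read the image configured on the dashboard-metrics-scraper
// deployment, which is what start_stop_delete_test.go:297 ultimately verifies.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed kubeconfig path; the CI run uses its minikube-integration kubeconfig instead.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deploy, err := client.AppsV1().Deployments("kubernetes-dashboard").
		Get(context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		// A NotFound or timeout here matches the dashboard never coming up in the log above.
		panic(err)
	}
	for _, c := range deploy.Spec.Template.Spec.Containers {
		// The test expects one of these images to contain "registry.k8s.io/echoserver:1.4".
		fmt.Printf("container %s uses image %s\n", c.Name, c.Image)
	}
}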
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-030870 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-030870 logs -n 25: (1.248497447s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:43 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-817450             | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC | 03 Jun 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223260            | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-030870  | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-817450                  | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-151788        | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC | 03 Jun 24 13:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223260                 | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC | 03 Jun 24 13:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-030870       | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:54 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-151788             | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 14:10 UTC | 03 Jun 24 14:10 UTC |
	| start   | -p newest-cni-937150 --memory=2200 --alsologtostderr   | newest-cni-937150            | jenkins | v1.33.1 | 03 Jun 24 14:10 UTC | 03 Jun 24 14:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 14:10 UTC | 03 Jun 24 14:10 UTC |
	| delete  | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 14:10 UTC | 03 Jun 24 14:10 UTC |
	| addons  | enable metrics-server -p newest-cni-937150             | newest-cni-937150            | jenkins | v1.33.1 | 03 Jun 24 14:11 UTC | 03 Jun 24 14:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-937150                                   | newest-cni-937150            | jenkins | v1.33.1 | 03 Jun 24 14:11 UTC | 03 Jun 24 14:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-937150                  | newest-cni-937150            | jenkins | v1.33.1 | 03 Jun 24 14:11 UTC | 03 Jun 24 14:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-937150 --memory=2200 --alsologtostderr   | newest-cni-937150            | jenkins | v1.33.1 | 03 Jun 24 14:11 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 14:11:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 14:11:13.764568 1150802 out.go:291] Setting OutFile to fd 1 ...
	I0603 14:11:13.764685 1150802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:11:13.764695 1150802 out.go:304] Setting ErrFile to fd 2...
	I0603 14:11:13.764699 1150802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:11:13.764888 1150802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 14:11:13.765431 1150802 out.go:298] Setting JSON to false
	I0603 14:11:13.766483 1150802 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17621,"bootTime":1717406253,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 14:11:13.766544 1150802 start.go:139] virtualization: kvm guest
	I0603 14:11:13.768807 1150802 out.go:177] * [newest-cni-937150] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 14:11:13.770114 1150802 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 14:11:13.770118 1150802 notify.go:220] Checking for updates...
	I0603 14:11:13.771488 1150802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 14:11:13.772835 1150802 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 14:11:13.774011 1150802 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 14:11:13.775286 1150802 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 14:11:13.776742 1150802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 14:11:13.778714 1150802 config.go:182] Loaded profile config "newest-cni-937150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 14:11:13.779094 1150802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 14:11:13.779138 1150802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 14:11:13.795616 1150802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43925
	I0603 14:11:13.796031 1150802 main.go:141] libmachine: () Calling .GetVersion
	I0603 14:11:13.796534 1150802 main.go:141] libmachine: Using API Version  1
	I0603 14:11:13.796558 1150802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 14:11:13.796913 1150802 main.go:141] libmachine: () Calling .GetMachineName
	I0603 14:11:13.797122 1150802 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:11:13.797382 1150802 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 14:11:13.797688 1150802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 14:11:13.797723 1150802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 14:11:13.812332 1150802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I0603 14:11:13.812695 1150802 main.go:141] libmachine: () Calling .GetVersion
	I0603 14:11:13.813145 1150802 main.go:141] libmachine: Using API Version  1
	I0603 14:11:13.813166 1150802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 14:11:13.813601 1150802 main.go:141] libmachine: () Calling .GetMachineName
	I0603 14:11:13.813795 1150802 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:11:13.849570 1150802 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 14:11:13.850820 1150802 start.go:297] selected driver: kvm2
	I0603 14:11:13.850837 1150802 start.go:901] validating driver "kvm2" against &{Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:11:13.850947 1150802 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 14:11:13.851590 1150802 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 14:11:13.851656 1150802 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 14:11:13.866845 1150802 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 14:11:13.867243 1150802 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0603 14:11:13.867271 1150802 cni.go:84] Creating CNI manager for ""
	I0603 14:11:13.867279 1150802 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 14:11:13.867314 1150802 start.go:340] cluster config:
	{Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-937150 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:11:13.867438 1150802 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 14:11:13.869251 1150802 out.go:177] * Starting "newest-cni-937150" primary control-plane node in "newest-cni-937150" cluster
	I0603 14:11:13.870725 1150802 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 14:11:13.870767 1150802 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 14:11:13.870784 1150802 cache.go:56] Caching tarball of preloaded images
	I0603 14:11:13.870864 1150802 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 14:11:13.870873 1150802 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 14:11:13.870977 1150802 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/config.json ...
	I0603 14:11:13.871170 1150802 start.go:360] acquireMachinesLock for newest-cni-937150: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 14:11:13.871261 1150802 start.go:364] duration metric: took 64.866µs to acquireMachinesLock for "newest-cni-937150"
	I0603 14:11:13.871280 1150802 start.go:96] Skipping create...Using existing machine configuration
	I0603 14:11:13.871287 1150802 fix.go:54] fixHost starting: 
	I0603 14:11:13.871554 1150802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 14:11:13.871587 1150802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 14:11:13.885958 1150802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46431
	I0603 14:11:13.886392 1150802 main.go:141] libmachine: () Calling .GetVersion
	I0603 14:11:13.887045 1150802 main.go:141] libmachine: Using API Version  1
	I0603 14:11:13.887070 1150802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 14:11:13.887403 1150802 main.go:141] libmachine: () Calling .GetMachineName
	I0603 14:11:13.887631 1150802 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:11:13.887808 1150802 main.go:141] libmachine: (newest-cni-937150) Calling .GetState
	I0603 14:11:13.889399 1150802 fix.go:112] recreateIfNeeded on newest-cni-937150: state=Stopped err=<nil>
	I0603 14:11:13.889454 1150802 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	W0603 14:11:13.889622 1150802 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 14:11:13.891482 1150802 out.go:177] * Restarting existing kvm2 VM for "newest-cni-937150" ...
	I0603 14:11:13.892718 1150802 main.go:141] libmachine: (newest-cni-937150) Calling .Start
	I0603 14:11:13.892869 1150802 main.go:141] libmachine: (newest-cni-937150) Ensuring networks are active...
	I0603 14:11:13.893587 1150802 main.go:141] libmachine: (newest-cni-937150) Ensuring network default is active
	I0603 14:11:13.893883 1150802 main.go:141] libmachine: (newest-cni-937150) Ensuring network mk-newest-cni-937150 is active
	I0603 14:11:13.894216 1150802 main.go:141] libmachine: (newest-cni-937150) Getting domain xml...
	I0603 14:11:13.895045 1150802 main.go:141] libmachine: (newest-cni-937150) Creating domain...
	I0603 14:11:15.094718 1150802 main.go:141] libmachine: (newest-cni-937150) Waiting to get IP...
	I0603 14:11:15.095645 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:11:15.096049 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:11:15.096126 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:11:15.096013 1150837 retry.go:31] will retry after 223.608068ms: waiting for machine to come up
	I0603 14:11:15.321655 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:11:15.322122 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:11:15.322149 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:11:15.322065 1150837 retry.go:31] will retry after 288.039253ms: waiting for machine to come up
	I0603 14:11:15.611655 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:11:15.612144 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:11:15.612174 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:11:15.612072 1150837 retry.go:31] will retry after 411.625758ms: waiting for machine to come up
	I0603 14:11:16.025779 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:11:16.026238 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:11:16.026293 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:11:16.026171 1150837 retry.go:31] will retry after 428.261787ms: waiting for machine to come up
	I0603 14:11:16.456049 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:11:16.456632 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:11:16.456667 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:11:16.456584 1150837 retry.go:31] will retry after 530.529302ms: waiting for machine to come up
	I0603 14:11:16.988621 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:11:16.989079 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:11:16.989109 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:11:16.989032 1150837 retry.go:31] will retry after 572.53367ms: waiting for machine to come up
	I0603 14:11:17.563025 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:11:17.563538 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:11:17.563567 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:11:17.563481 1150837 retry.go:31] will retry after 864.097053ms: waiting for machine to come up
	I0603 14:11:18.429060 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:11:18.429547 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:11:18.429576 1150802 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:11:18.429472 1150837 retry.go:31] will retry after 1.125822006s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.879027410Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5dc28682caa3ee97ac27574a0837308d28e050b5e3c79a879b619d37ce3a5d4c,Metadata:&PodSandboxMetadata{Name:busybox,Uid:f50f4fd8-3455-456e-805d-c17087c1ca83,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422629991788018,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f50f4fd8-3455-456e-805d-c17087c1ca83,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:50:22.081263068Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4fafb83fdac4a702b3ef20fd25c696380772cc3a76753f310332677a34b6765,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-flxqj,Uid:a116f363-ca50-4e2d-8c77-e99498c81e36,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:171742
2629990076105,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-flxqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a116f363-ca50-4e2d-8c77-e99498c81e36,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:50:22.081267017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:38f7029ddeecb9dd697a572808182f4f4034f7cf574196b3131b953ae20f8bab,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-8xw9v,Uid:4ab08177-2171-493b-928c-456d8a21fd68,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422628190021173,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-8xw9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ab08177-2171-493b-928c-456d8a21fd68,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03
T13:50:22.081261681Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50212807557a6cae3046b392a9f588d9a4a74ddbd31ba2fc4db11ed6b55179df,Metadata:&PodSandboxMetadata{Name:kube-proxy-thsrx,Uid:96df5442-b343-47c8-a561-681a2d568d50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422622399895884,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-thsrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96df5442-b343-47c8-a561-681a2d568d50,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-03T13:50:22.081253761Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:64d080e5-d582-4ee5-adbc-a652e8e2b820,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422622394234266,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc-a652e8e2b820,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-06-03T13:50:22.081269192Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f89c6219a84bbc27f2068d7ee65d9d5f07ba8c06ae07b3319be9f17ffbae51d9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-030870,Uid:6e5db6f179904992f4c5d517b64cc96f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422617628484750,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e5db6f179904992f4c5d517b64cc96f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6e5db6f179904992f4c5d517b64cc96f,kubernetes.io/config.seen: 2024-06-03T13:50:17.073005883Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:105382b122cbc95b14a6aa53c76054737a5391260f189f0dd20e72059cdca7bd,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-030870,Uid:9b1dfed2df
38083366ac860dd7d5c185,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422617624811857,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b1dfed2df38083366ac860dd7d5c185,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.177:2379,kubernetes.io/config.hash: 9b1dfed2df38083366ac860dd7d5c185,kubernetes.io/config.seen: 2024-06-03T13:50:17.169407395Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:83a56ce979c24bec8630915596eb0ad1317808b60d3bc5c1ab9534f669454d2b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-030870,Uid:16b03774b0b44028cd4391d23b00169b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422617603916151,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.nam
e: kube-apiserver-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b03774b0b44028cd4391d23b00169b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.177:8444,kubernetes.io/config.hash: 16b03774b0b44028cd4391d23b00169b,kubernetes.io/config.seen: 2024-06-03T13:50:17.072999959Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:253ddf921d5b0de5dee2202c0700997ad5d1efd4e620d239a3bdb1f47ac1b445,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-030870,Uid:ec4fa399a65397995760045276de0216,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717422617603015433,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4fa399a65397995760045276de0216,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: ec4fa399a65397995760045276de0216,kubernetes.io/config.seen: 2024-06-03T13:50:17.073004772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ea1926fc-4ef1-4219-a477-ba4ab8b1dd90 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.879684671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a4a5e27-b2fd-42bc-b295-1b3a42e37d2c name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.879754367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a4a5e27-b2fd-42bc-b295-1b3a42e37d2c name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.879966633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422653381235369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac76999daa4b17dee826c4752ad866f2acbc6b38d89b12d2ef962de2a4f20e3,PodSandboxId:5dc28682caa3ee97ac27574a0837308d28e050b5e3c79a879b619d37ce3a5d4c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422631195089075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f50f4fd8-3455-456e-805d-c17087c1ca83,},Annotations:map[string]string{io.kubernetes.container.hash: f28de140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d,PodSandboxId:b4fafb83fdac4a702b3ef20fd25c696380772cc3a76753f310332677a34b6765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422630294360668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flxqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a116f363-ca50-4e2d-8c77-e99498c81e36,},Annotations:map[string]string{io.kubernetes.container.hash: f9622600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b,PodSandboxId:50212807557a6cae3046b392a9f588d9a4a74ddbd31ba2fc4db11ed6b55179df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422622608060742,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thsrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96df5442-b
343-47c8-a561-681a2d568d50,},Annotations:map[string]string{io.kubernetes.container.hash: e9a8b0fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422622572995988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc
-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d,PodSandboxId:105382b122cbc95b14a6aa53c76054737a5391260f189f0dd20e72059cdca7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422617925241759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b1dfed2df38083366ac860dd7d5c185,},Annotations:map[
string]string{io.kubernetes.container.hash: f5995d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a,PodSandboxId:f89c6219a84bbc27f2068d7ee65d9d5f07ba8c06ae07b3319be9f17ffbae51d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422617879459233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e5db6f179904992f4c5d517b64cc96f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7,PodSandboxId:253ddf921d5b0de5dee2202c0700997ad5d1efd4e620d239a3bdb1f47ac1b445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422617865448707,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4fa399a65397995760045276de
0216,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836,PodSandboxId:83a56ce979c24bec8630915596eb0ad1317808b60d3bc5c1ab9534f669454d2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422617784234081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b03774b0b44028cd4391d23b0016
9b,},Annotations:map[string]string{io.kubernetes.container.hash: 447f56d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a4a5e27-b2fd-42bc-b295-1b3a42e37d2c name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.880979148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a923e64-850d-4595-a7f5-cd424e08084f name=/runtime.v1.RuntimeService/Version
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.881051059Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a923e64-850d-4595-a7f5-cd424e08084f name=/runtime.v1.RuntimeService/Version
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.882321115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abb33878-8dc2-4131-9970-6b1f1751d0fa name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.882897842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423881882877977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abb33878-8dc2-4131-9970-6b1f1751d0fa name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.883323703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a114630a-9bbb-4b0a-9743-10a9394e8002 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.883399013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a114630a-9bbb-4b0a-9743-10a9394e8002 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.883653933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422653381235369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac76999daa4b17dee826c4752ad866f2acbc6b38d89b12d2ef962de2a4f20e3,PodSandboxId:5dc28682caa3ee97ac27574a0837308d28e050b5e3c79a879b619d37ce3a5d4c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422631195089075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f50f4fd8-3455-456e-805d-c17087c1ca83,},Annotations:map[string]string{io.kubernetes.container.hash: f28de140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d,PodSandboxId:b4fafb83fdac4a702b3ef20fd25c696380772cc3a76753f310332677a34b6765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422630294360668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flxqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a116f363-ca50-4e2d-8c77-e99498c81e36,},Annotations:map[string]string{io.kubernetes.container.hash: f9622600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b,PodSandboxId:50212807557a6cae3046b392a9f588d9a4a74ddbd31ba2fc4db11ed6b55179df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422622608060742,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thsrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96df5442-b
343-47c8-a561-681a2d568d50,},Annotations:map[string]string{io.kubernetes.container.hash: e9a8b0fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422622572995988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc
-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d,PodSandboxId:105382b122cbc95b14a6aa53c76054737a5391260f189f0dd20e72059cdca7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422617925241759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b1dfed2df38083366ac860dd7d5c185,},Annotations:map[
string]string{io.kubernetes.container.hash: f5995d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a,PodSandboxId:f89c6219a84bbc27f2068d7ee65d9d5f07ba8c06ae07b3319be9f17ffbae51d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422617879459233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e5db6f179904992f4c5d517b64cc96f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7,PodSandboxId:253ddf921d5b0de5dee2202c0700997ad5d1efd4e620d239a3bdb1f47ac1b445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422617865448707,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4fa399a65397995760045276de
0216,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836,PodSandboxId:83a56ce979c24bec8630915596eb0ad1317808b60d3bc5c1ab9534f669454d2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422617784234081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b03774b0b44028cd4391d23b0016
9b,},Annotations:map[string]string{io.kubernetes.container.hash: 447f56d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a114630a-9bbb-4b0a-9743-10a9394e8002 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.923173141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1374506-28a0-4eb8-a18a-ba9055d99030 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.923260004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1374506-28a0-4eb8-a18a-ba9055d99030 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.924902952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d32130c1-103f-4aee-9932-e60953a65247 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.925604438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423881925465140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d32130c1-103f-4aee-9932-e60953a65247 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.926358239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe85ad78-167f-4fb5-892f-168b56737fd5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.926427083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe85ad78-167f-4fb5-892f-168b56737fd5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.926673313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422653381235369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac76999daa4b17dee826c4752ad866f2acbc6b38d89b12d2ef962de2a4f20e3,PodSandboxId:5dc28682caa3ee97ac27574a0837308d28e050b5e3c79a879b619d37ce3a5d4c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422631195089075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f50f4fd8-3455-456e-805d-c17087c1ca83,},Annotations:map[string]string{io.kubernetes.container.hash: f28de140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d,PodSandboxId:b4fafb83fdac4a702b3ef20fd25c696380772cc3a76753f310332677a34b6765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422630294360668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flxqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a116f363-ca50-4e2d-8c77-e99498c81e36,},Annotations:map[string]string{io.kubernetes.container.hash: f9622600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b,PodSandboxId:50212807557a6cae3046b392a9f588d9a4a74ddbd31ba2fc4db11ed6b55179df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422622608060742,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thsrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96df5442-b
343-47c8-a561-681a2d568d50,},Annotations:map[string]string{io.kubernetes.container.hash: e9a8b0fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422622572995988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc
-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d,PodSandboxId:105382b122cbc95b14a6aa53c76054737a5391260f189f0dd20e72059cdca7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422617925241759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b1dfed2df38083366ac860dd7d5c185,},Annotations:map[
string]string{io.kubernetes.container.hash: f5995d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a,PodSandboxId:f89c6219a84bbc27f2068d7ee65d9d5f07ba8c06ae07b3319be9f17ffbae51d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422617879459233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e5db6f179904992f4c5d517b64cc96f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7,PodSandboxId:253ddf921d5b0de5dee2202c0700997ad5d1efd4e620d239a3bdb1f47ac1b445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422617865448707,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4fa399a65397995760045276de
0216,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836,PodSandboxId:83a56ce979c24bec8630915596eb0ad1317808b60d3bc5c1ab9534f669454d2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422617784234081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b03774b0b44028cd4391d23b0016
9b,},Annotations:map[string]string{io.kubernetes.container.hash: 447f56d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe85ad78-167f-4fb5-892f-168b56737fd5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.961468125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e3e1414-21fd-4105-bc3c-7ffe6fb7aa2b name=/runtime.v1.RuntimeService/Version
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.961616157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e3e1414-21fd-4105-bc3c-7ffe6fb7aa2b name=/runtime.v1.RuntimeService/Version
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.962893497Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0d54bdd-6e57-4639-b8f2-cff5e2923ef6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.963572239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423881963468849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0d54bdd-6e57-4639-b8f2-cff5e2923ef6 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.964324791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=991de7a4-3202-4e53-bb82-c7d18ba5d616 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.964398875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=991de7a4-3202-4e53-bb82-c7d18ba5d616 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:11:21 default-k8s-diff-port-030870 crio[725]: time="2024-06-03 14:11:21.964638703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422653381235369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac76999daa4b17dee826c4752ad866f2acbc6b38d89b12d2ef962de2a4f20e3,PodSandboxId:5dc28682caa3ee97ac27574a0837308d28e050b5e3c79a879b619d37ce3a5d4c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1717422631195089075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f50f4fd8-3455-456e-805d-c17087c1ca83,},Annotations:map[string]string{io.kubernetes.container.hash: f28de140,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d,PodSandboxId:b4fafb83fdac4a702b3ef20fd25c696380772cc3a76753f310332677a34b6765,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422630294360668,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flxqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a116f363-ca50-4e2d-8c77-e99498c81e36,},Annotations:map[string]string{io.kubernetes.container.hash: f9622600,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b,PodSandboxId:50212807557a6cae3046b392a9f588d9a4a74ddbd31ba2fc4db11ed6b55179df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717422622608060742,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-thsrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96df5442-b
343-47c8-a561-681a2d568d50,},Annotations:map[string]string{io.kubernetes.container.hash: e9a8b0fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4,PodSandboxId:a89474e4dfc767507ab6b4dfa45eea5b464d6a43061f913597fdabefb21b3359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717422622572995988,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64d080e5-d582-4ee5-adbc
-a652e8e2b820,},Annotations:map[string]string{io.kubernetes.container.hash: 310fe3a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d,PodSandboxId:105382b122cbc95b14a6aa53c76054737a5391260f189f0dd20e72059cdca7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422617925241759,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b1dfed2df38083366ac860dd7d5c185,},Annotations:map[
string]string{io.kubernetes.container.hash: f5995d14,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a,PodSandboxId:f89c6219a84bbc27f2068d7ee65d9d5f07ba8c06ae07b3319be9f17ffbae51d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422617879459233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e5db6f179904992f4c5d517b64cc96f,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7,PodSandboxId:253ddf921d5b0de5dee2202c0700997ad5d1efd4e620d239a3bdb1f47ac1b445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422617865448707,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4fa399a65397995760045276de
0216,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836,PodSandboxId:83a56ce979c24bec8630915596eb0ad1317808b60d3bc5c1ab9534f669454d2b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422617784234081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-030870,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16b03774b0b44028cd4391d23b0016
9b,},Annotations:map[string]string{io.kubernetes.container.hash: 447f56d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=991de7a4-3202-4e53-bb82-c7d18ba5d616 name=/runtime.v1.RuntimeService/ListContainers
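The Version / ImageFsInfo / ListContainers request-response pairs above are the kubelet's periodic CRI polling, visible only because crio is logging at debug level; the container list they return is the same one summarized in the next section. As a sketch (assuming crictl is present on the node and pointed at the crio socket, as it is in minikube's crio runtime), the same CRI endpoints can be queried by hand:

    out/minikube-linux-amd64 -p default-k8s-diff-port-030870 ssh "sudo crictl version"
    out/minikube-linux-amd64 -p default-k8s-diff-port-030870 ssh "sudo crictl imagefsinfo"
    out/minikube-linux-amd64 -p default-k8s-diff-port-030870 ssh "sudo crictl ps -a"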
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	969178964b33d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   a89474e4dfc76       storage-provisioner
	6ac76999daa4b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   5dc28682caa3e       busybox
	bc9ddfc8f250b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   b4fafb83fdac4       coredns-7db6d8ff4d-flxqj
	9359de3110480       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      20 minutes ago      Running             kube-proxy                1                   50212807557a6       kube-proxy-thsrx
	bc407a1d19d20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   a89474e4dfc76       storage-provisioner
	c1051588032f5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   105382b122cbc       etcd-default-k8s-diff-port-030870
	7aab9931698b9       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      21 minutes ago      Running             kube-scheduler            1                   f89c6219a84bb       kube-scheduler-default-k8s-diff-port-030870
	b97dd1f775dd3       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      21 minutes ago      Running             kube-controller-manager   1                   253ddf921d5b0       kube-controller-manager-default-k8s-diff-port-030870
	50541b09cc089       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      21 minutes ago      Running             kube-apiserver            1                   83a56ce979c24       kube-apiserver-default-k8s-diff-port-030870
	
	
	==> coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39746 - 11886 "HINFO IN 1972896720099381992.1985859716288422354. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030146491s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-030870
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-030870
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=default-k8s-diff-port-030870
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_42_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:42:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-030870
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:11:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 14:11:17 +0000   Mon, 03 Jun 2024 13:42:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 14:11:17 +0000   Mon, 03 Jun 2024 13:42:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 14:11:17 +0000   Mon, 03 Jun 2024 13:42:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 14:11:17 +0000   Mon, 03 Jun 2024 13:50:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    default-k8s-diff-port-030870
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 542b7249b64443b180ba274289f8f2ee
	  System UUID:                542b7249-b644-43b1-80ba-274289f8f2ee
	  Boot ID:                    cfbcbd2e-8522-45d1-b37a-c0a941b08c1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7db6d8ff4d-flxqj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-030870                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-030870             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-030870    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-thsrx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-030870             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-8xw9v                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-030870 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-030870 event: Registered Node default-k8s-diff-port-030870 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-030870 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-030870 event: Registered Node default-k8s-diff-port-030870 in Controller
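The node description above can be re-checked against the same cluster with kubectl; a sketch, assuming the kubeconfig context that minikube creates with the profile name:

    kubectl --context default-k8s-diff-port-030870 describe node default-k8s-diff-port-030870
    kubectl --context default-k8s-diff-port-030870 top node   # relies on metrics.k8s.io, which is unavailable in this run (see the kube-apiserver section below)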
	
	
	==> dmesg <==
	[Jun 3 13:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056780] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041745] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.708923] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.416866] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.635491] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 3 13:50] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.066610] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084347] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.205695] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.172925] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.351117] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.868188] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.063444] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.516215] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.623561] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.998942] systemd-fstab-generator[1545]: Ignoring "noauto" option for root device
	[  +1.728805] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.900832] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] <==
	{"level":"warn","ts":"2024-06-03T13:50:37.316635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.940319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T13:50:37.316657Z","caller":"traceutil/trace.go:171","msg":"trace[1157352558] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:636; }","duration":"312.992521ms","start":"2024-06-03T13:50:37.003658Z","end":"2024-06-03T13:50:37.316651Z","steps":["trace[1157352558] 'agreement among raft nodes before linearized reading'  (duration: 312.95397ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:37.316675Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:37.003643Z","time spent":"313.027787ms","remote":"127.0.0.1:40062","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-06-03T13:50:37.316741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:36.906616Z","time spent":"409.433487ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5460,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-030870\" mod_revision:574 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-030870\" value_size:5392 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-030870\" > >"}
	{"level":"warn","ts":"2024-06-03T13:50:37.935733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"390.850915ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-030870\" ","response":"range_response_count:1 size:5550"}
	{"level":"info","ts":"2024-06-03T13:50:37.935883Z","caller":"traceutil/trace.go:171","msg":"trace[1679571383] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-030870; range_end:; response_count:1; response_revision:636; }","duration":"391.044047ms","start":"2024-06-03T13:50:37.544824Z","end":"2024-06-03T13:50:37.935868Z","steps":["trace[1679571383] 'range keys from in-memory index tree'  (duration: 390.708926ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T13:50:37.935947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T13:50:37.544807Z","time spent":"391.129612ms","remote":"127.0.0.1:40240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5574,"request content":"key:\"/registry/minions/default-k8s-diff-port-030870\" "}
	{"level":"warn","ts":"2024-06-03T13:50:59.606114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.920762ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8080178866669689405 > lease_revoke:<id:70228fde5d7719c9>","response":"size:29"}
	{"level":"info","ts":"2024-06-03T14:00:19.845782Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":879}
	{"level":"info","ts":"2024-06-03T14:00:19.859098Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":879,"took":"12.905628ms","hash":3172330257,"current-db-size-bytes":2895872,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2895872,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-06-03T14:00:19.859171Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3172330257,"revision":879,"compact-revision":-1}
	{"level":"info","ts":"2024-06-03T14:05:19.854839Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1122}
	{"level":"info","ts":"2024-06-03T14:05:19.85966Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1122,"took":"4.518506ms","hash":2820318459,"current-db-size-bytes":2895872,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1708032,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-06-03T14:05:19.859718Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2820318459,"revision":1122,"compact-revision":879}
	{"level":"info","ts":"2024-06-03T14:10:19.863987Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1366}
	{"level":"info","ts":"2024-06-03T14:10:19.871759Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1366,"took":"6.977404ms","hash":2180775227,"current-db-size-bytes":2895872,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1667072,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-06-03T14:10:19.871968Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2180775227,"revision":1366,"compact-revision":1122}
	{"level":"info","ts":"2024-06-03T14:10:39.162955Z","caller":"traceutil/trace.go:171","msg":"trace[1941580225] linearizableReadLoop","detail":"{readStateIndex:1917; appliedIndex:1916; }","duration":"122.501309ms","start":"2024-06-03T14:10:39.040402Z","end":"2024-06-03T14:10:39.162903Z","steps":["trace[1941580225] 'read index received'  (duration: 122.341893ms)","trace[1941580225] 'applied index is now lower than readState.Index'  (duration: 158.967µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-03T14:10:39.163172Z","caller":"traceutil/trace.go:171","msg":"trace[1606049721] transaction","detail":"{read_only:false; response_revision:1625; number_of_response:1; }","duration":"235.712352ms","start":"2024-06-03T14:10:38.92744Z","end":"2024-06-03T14:10:39.163152Z","steps":["trace[1606049721] 'process raft request'  (duration: 235.338315ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T14:10:39.163282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.729206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-03T14:10:39.163344Z","caller":"traceutil/trace.go:171","msg":"trace[1924658618] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:1625; }","duration":"122.951609ms","start":"2024-06-03T14:10:39.040382Z","end":"2024-06-03T14:10:39.163333Z","steps":["trace[1924658618] 'agreement among raft nodes before linearized reading'  (duration: 122.72347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-03T14:10:40.107205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.214571ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8080178866669695895 > lease_revoke:<id:70228fde5d773349>","response":"size:29"}
	{"level":"info","ts":"2024-06-03T14:10:40.107352Z","caller":"traceutil/trace.go:171","msg":"trace[1160188295] linearizableReadLoop","detail":"{readStateIndex:1918; appliedIndex:1917; }","duration":"104.707707ms","start":"2024-06-03T14:10:40.002634Z","end":"2024-06-03T14:10:40.107342Z","steps":["trace[1160188295] 'read index received'  (duration: 23.756µs)","trace[1160188295] 'applied index is now lower than readState.Index'  (duration: 104.682732ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-03T14:10:40.107428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.803668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-03T14:10:40.107561Z","caller":"traceutil/trace.go:171","msg":"trace[1286494201] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1625; }","duration":"104.888238ms","start":"2024-06-03T14:10:40.00259Z","end":"2024-06-03T14:10:40.107478Z","steps":["trace[1286494201] 'agreement among raft nodes before linearized reading'  (duration: 104.804546ms)"],"step_count":1}
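The recurring "apply request took too long" warnings indicate slow reads and commits (hundreds of milliseconds against the 100ms expectation) rather than a functional failure; compactions are still completing. A quick way to gauge backend latency, assuming kubeadm's default etcd metrics listener on http://127.0.0.1:2381 inside the node, would be:

    out/minikube-linux-amd64 -p default-k8s-diff-port-030870 ssh \
      "curl -s http://127.0.0.1:2381/metrics | grep -E 'etcd_disk_(wal_fsync|backend_commit)_duration_seconds_(sum|count)'"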
	
	
	==> kernel <==
	 14:11:22 up 21 min,  0 users,  load average: 0.04, 0.09, 0.09
	Linux default-k8s-diff-port-030870 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] <==
	I0603 14:06:22.288935       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:08:22.287660       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:08:22.288140       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:08:22.288207       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:08:22.289907       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:08:22.289950       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:08:22.289958       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:10:21.290001       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:10:21.290152       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 14:10:22.290774       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:10:22.290824       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:10:22.290834       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:10:22.291068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:10:22.291208       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:10:22.292399       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:11:22.291346       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:11:22.291422       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:11:22.291436       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:11:22.292778       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:11:22.292906       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:11:22.292943       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
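The repeated 503s above all trace back to the aggregation layer: the apiserver cannot reach the service backing the v1beta1.metrics.k8s.io APIService, so both the OpenAPI download and group discovery for that API keep failing and getting re-queued. As a sketch of how to confirm this outside the test harness (assuming the addon's usual object names, i.e. the v1beta1.metrics.k8s.io APIService and a k8s-app=metrics-server pod label):

	kubectl --context default-k8s-diff-port-030870 get apiservice v1beta1.metrics.k8s.io
	kubectl --context default-k8s-diff-port-030870 -n kube-system get pods -l k8s-app=metrics-server

With the metrics-server pod stuck in ImagePullBackOff (see the kubelet log below), the APIService stays unavailable and these errors recur on every retry.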
	
	
	==> kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] <==
	I0603 14:05:36.674871       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:06:06.077543       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:06:06.681660       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 14:06:29.195362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="319.662µs"
	E0603 14:06:36.082754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:06:36.689656       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 14:06:42.193310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="43.904µs"
	E0603 14:07:06.087767       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:07:06.697112       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:07:36.093321       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:07:36.705292       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:08:06.098128       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:08:06.714024       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:08:36.103822       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:08:36.721380       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:09:06.109277       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:09:06.730094       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:09:36.114117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:09:36.738821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:10:06.121836       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:10:06.754022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:10:36.127844       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:10:36.764093       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:11:06.133231       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:11:06.772790       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
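The controller-manager errors share the same root cause: discovery still advertises metrics.k8s.io/v1beta1, but the aggregated endpoint never answers, so the resource-quota and garbage-collector controllers keep reporting stale discovery. A minimal sketch for querying that group directly, using only standard API paths:

	kubectl --context default-k8s-diff-port-030870 api-resources --api-group=metrics.k8s.io
	kubectl --context default-k8s-diff-port-030870 get --raw /apis/metrics.k8s.io/v1beta1

Both calls should surface the same 503 / service-unavailable error for as long as the backing service has no ready endpoints.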
	
	
	==> kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] <==
	I0603 13:50:22.811547       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:50:22.823458       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.177"]
	I0603 13:50:22.873189       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:50:22.873271       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:50:22.873298       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:50:22.876637       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:50:22.876958       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:50:22.876989       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:50:22.878193       1 config.go:192] "Starting service config controller"
	I0603 13:50:22.878228       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:50:22.878261       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:50:22.878265       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:50:22.879453       1 config.go:319] "Starting node config controller"
	I0603 13:50:22.879595       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:50:22.979232       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 13:50:22.979309       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:50:22.979941       1 shared_informer.go:320] Caches are synced for node config
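kube-proxy came up in plain iptables mode, so service routing on this node is ordinary iptables NAT rules. If those rules needed inspecting, a sketch using the same ssh subcommand this run already exercises (KUBE-SERVICES being the top-level chain kube-proxy programs) would be:

	out/minikube-linux-amd64 -p default-k8s-diff-port-030870 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"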
	
	
	==> kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] <==
	I0603 13:50:18.797414       1 serving.go:380] Generated self-signed cert in-memory
	W0603 13:50:21.239685       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0603 13:50:21.239816       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0603 13:50:21.239926       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0603 13:50:21.239951       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0603 13:50:21.276905       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0603 13:50:21.277016       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:50:21.280874       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0603 13:50:21.280945       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0603 13:50:21.281444       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0603 13:50:21.281832       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0603 13:50:21.382144       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 14:09:12 default-k8s-diff-port-030870 kubelet[938]: E0603 14:09:12.179641     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:09:17 default-k8s-diff-port-030870 kubelet[938]: E0603 14:09:17.203352     938 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:09:17 default-k8s-diff-port-030870 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:09:17 default-k8s-diff-port-030870 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:09:17 default-k8s-diff-port-030870 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:09:17 default-k8s-diff-port-030870 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:09:26 default-k8s-diff-port-030870 kubelet[938]: E0603 14:09:26.178774     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:09:39 default-k8s-diff-port-030870 kubelet[938]: E0603 14:09:39.180721     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:09:51 default-k8s-diff-port-030870 kubelet[938]: E0603 14:09:51.181196     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:10:06 default-k8s-diff-port-030870 kubelet[938]: E0603 14:10:06.179983     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:10:17 default-k8s-diff-port-030870 kubelet[938]: E0603 14:10:17.207667     938 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:10:17 default-k8s-diff-port-030870 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:10:17 default-k8s-diff-port-030870 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:10:17 default-k8s-diff-port-030870 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:10:17 default-k8s-diff-port-030870 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:10:20 default-k8s-diff-port-030870 kubelet[938]: E0603 14:10:20.179189     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:10:34 default-k8s-diff-port-030870 kubelet[938]: E0603 14:10:34.183226     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:10:45 default-k8s-diff-port-030870 kubelet[938]: E0603 14:10:45.179624     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:11:00 default-k8s-diff-port-030870 kubelet[938]: E0603 14:11:00.179875     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:11:12 default-k8s-diff-port-030870 kubelet[938]: E0603 14:11:12.179035     938 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-8xw9v" podUID="4ab08177-2171-493b-928c-456d8a21fd68"
	Jun 03 14:11:17 default-k8s-diff-port-030870 kubelet[938]: E0603 14:11:17.208746     938 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:11:17 default-k8s-diff-port-030870 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:11:17 default-k8s-diff-port-030870 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:11:17 default-k8s-diff-port-030870 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:11:17 default-k8s-diff-port-030870 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
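The ImagePullBackOff lines are self-inflicted by the test setup: metrics-server was enabled with --registries=MetricsServer=fake.domain (see the Audit table later in this report), so the kubelet is asked to pull fake.domain/registry.k8s.io/echoserver:1.4, which can never resolve. A hedged way to confirm which image the pod actually requests, assuming the pod still exists when the command is run and that metrics-server is its first container:

	kubectl --context default-k8s-diff-port-030870 -n kube-system get pod metrics-server-569cc877fc-8xw9v -o jsonpath='{.spec.containers[0].image}'
	kubectl --context default-k8s-diff-port-030870 -n kube-system describe pod metrics-server-569cc877fc-8xw9v

The ip6tables canary failures interleaved above appear to be unrelated noise: the guest kernel has no ip6tables nat table loaded, so the kubelet's canary chain cannot be created.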
	
	
	==> storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] <==
	I0603 13:50:53.493461       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 13:50:53.512767       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 13:50:53.512853       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 13:51:10.919097       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 13:51:10.921354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-030870_5756df9b-0457-439a-9273-51b749b46572!
	I0603 13:51:10.922231       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e9bfded-55bf-4dea-97b9-05156a907d75", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-030870_5756df9b-0457-439a-9273-51b749b46572 became leader
	I0603 13:51:11.022553       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-030870_5756df9b-0457-439a-9273-51b749b46572!
	
	
	==> storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] <==
	I0603 13:50:22.750285       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0603 13:50:52.753832       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
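Taken together, the two storage-provisioner excerpts describe an ordinary restart: this instance exited when the in-cluster apiserver address (10.96.0.1:443) timed out during the node's startup window, and the instance shown in the previous block re-acquired the k8s.io-minikube-hostpath lease about eighteen seconds later. Since that lease is held via an Endpoints object in kube-system, the current holder can be read back with (only the object name from the log is assumed):

	kubectl --context default-k8s-diff-port-030870 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The leader identity is recorded in the object's annotations.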
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-030870 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-8xw9v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-030870 describe pod metrics-server-569cc877fc-8xw9v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-030870 describe pod metrics-server-569cc877fc-8xw9v: exit status 1 (69.763385ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-8xw9v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-030870 describe pod metrics-server-569cc877fc-8xw9v: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (448.89s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (321.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-817450 -n no-preload-817450
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-03 14:10:49.879033268 +0000 UTC m=+6402.083591781
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-817450 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-817450 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.24µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-817450 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
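The assertion at start_stop_delete_test.go:297 is just a string check on the image of the dashboard-metrics-scraper deployment, which the test had enabled with --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (see the Audit table below). The equivalent manual check, assuming only the deployment name the addon uses, would be:

	kubectl --context no-preload-817450 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'

Here it never got that far: the describe call above failed immediately because the test's 9m0s context had already expired, so the deployment info string in the failure message is empty.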
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-817450 -n no-preload-817450
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-817450 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-817450 logs -n 25: (1.411599456s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo find                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo crio                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-021279                                       | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-069000 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | disable-driver-mounts-069000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:43 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-817450             | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC | 03 Jun 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223260            | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-030870  | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-817450                  | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-151788        | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC | 03 Jun 24 13:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223260                 | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC | 03 Jun 24 13:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-030870       | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:54 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-151788             | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 14:10 UTC | 03 Jun 24 14:10 UTC |
	| start   | -p newest-cni-937150 --memory=2200 --alsologtostderr   | newest-cni-937150            | jenkins | v1.33.1 | 03 Jun 24 14:10 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 14:10:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 14:10:10.374422 1149858 out.go:291] Setting OutFile to fd 1 ...
	I0603 14:10:10.374815 1149858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:10:10.374864 1149858 out.go:304] Setting ErrFile to fd 2...
	I0603 14:10:10.374882 1149858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 14:10:10.375383 1149858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 14:10:10.376501 1149858 out.go:298] Setting JSON to false
	I0603 14:10:10.377800 1149858 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17557,"bootTime":1717406253,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 14:10:10.377878 1149858 start.go:139] virtualization: kvm guest
	I0603 14:10:10.380015 1149858 out.go:177] * [newest-cni-937150] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 14:10:10.381875 1149858 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 14:10:10.381875 1149858 notify.go:220] Checking for updates...
	I0603 14:10:10.383549 1149858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 14:10:10.384915 1149858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 14:10:10.386228 1149858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 14:10:10.387567 1149858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 14:10:10.389006 1149858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 14:10:10.390875 1149858 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 14:10:10.390965 1149858 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 14:10:10.391042 1149858 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 14:10:10.391141 1149858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 14:10:10.428062 1149858 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 14:10:10.429645 1149858 start.go:297] selected driver: kvm2
	I0603 14:10:10.429671 1149858 start.go:901] validating driver "kvm2" against <nil>
	I0603 14:10:10.429683 1149858 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 14:10:10.430490 1149858 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 14:10:10.430587 1149858 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 14:10:10.447373 1149858 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 14:10:10.447435 1149858 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0603 14:10:10.447479 1149858 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0603 14:10:10.447820 1149858 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0603 14:10:10.447938 1149858 cni.go:84] Creating CNI manager for ""
	I0603 14:10:10.447966 1149858 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 14:10:10.447982 1149858 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 14:10:10.448077 1149858 start.go:340] cluster config:
	{Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:10:10.448191 1149858 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 14:10:10.450351 1149858 out.go:177] * Starting "newest-cni-937150" primary control-plane node in "newest-cni-937150" cluster
	I0603 14:10:10.451445 1149858 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 14:10:10.451486 1149858 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0603 14:10:10.451500 1149858 cache.go:56] Caching tarball of preloaded images
	I0603 14:10:10.451625 1149858 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 14:10:10.451636 1149858 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0603 14:10:10.451762 1149858 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/config.json ...
	I0603 14:10:10.451795 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/config.json: {Name:mk67c4ee36f012c1f62af75927e93199f9c68f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:10.451989 1149858 start.go:360] acquireMachinesLock for newest-cni-937150: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 14:10:10.452047 1149858 start.go:364] duration metric: took 27.044µs to acquireMachinesLock for "newest-cni-937150"
	I0603 14:10:10.452076 1149858 start.go:93] Provisioning new machine with config: &{Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 14:10:10.452194 1149858 start.go:125] createHost starting for "" (driver="kvm2")
	I0603 14:10:10.453898 1149858 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0603 14:10:10.454062 1149858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 14:10:10.454116 1149858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 14:10:10.469365 1149858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0603 14:10:10.469815 1149858 main.go:141] libmachine: () Calling .GetVersion
	I0603 14:10:10.470411 1149858 main.go:141] libmachine: Using API Version  1
	I0603 14:10:10.470435 1149858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 14:10:10.470796 1149858 main.go:141] libmachine: () Calling .GetMachineName
	I0603 14:10:10.471043 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetMachineName
	I0603 14:10:10.471189 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:10.471379 1149858 start.go:159] libmachine.API.Create for "newest-cni-937150" (driver="kvm2")
	I0603 14:10:10.471410 1149858 client.go:168] LocalClient.Create starting
	I0603 14:10:10.471441 1149858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
	I0603 14:10:10.471475 1149858 main.go:141] libmachine: Decoding PEM data...
	I0603 14:10:10.471494 1149858 main.go:141] libmachine: Parsing certificate...
	I0603 14:10:10.471550 1149858 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
	I0603 14:10:10.471569 1149858 main.go:141] libmachine: Decoding PEM data...
	I0603 14:10:10.471580 1149858 main.go:141] libmachine: Parsing certificate...
	I0603 14:10:10.471594 1149858 main.go:141] libmachine: Running pre-create checks...
	I0603 14:10:10.471601 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .PreCreateCheck
	I0603 14:10:10.471986 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetConfigRaw
	I0603 14:10:10.472455 1149858 main.go:141] libmachine: Creating machine...
	I0603 14:10:10.472473 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .Create
	I0603 14:10:10.472663 1149858 main.go:141] libmachine: (newest-cni-937150) Creating KVM machine...
	I0603 14:10:10.474103 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found existing default KVM network
	I0603 14:10:10.475564 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:10.475343 1149881 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4f:31:b1} reservation:<nil>}
	I0603 14:10:10.476932 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:10.476834 1149881 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b4a40}
	I0603 14:10:10.476973 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | created network xml: 
	I0603 14:10:10.476986 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | <network>
	I0603 14:10:10.476993 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   <name>mk-newest-cni-937150</name>
	I0603 14:10:10.477001 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   <dns enable='no'/>
	I0603 14:10:10.477009 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   
	I0603 14:10:10.477019 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0603 14:10:10.477051 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |     <dhcp>
	I0603 14:10:10.477081 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0603 14:10:10.477093 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |     </dhcp>
	I0603 14:10:10.477103 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   </ip>
	I0603 14:10:10.477116 1149858 main.go:141] libmachine: (newest-cni-937150) DBG |   
	I0603 14:10:10.477125 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | </network>
	I0603 14:10:10.477140 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | 
	I0603 14:10:10.482715 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | trying to create private KVM network mk-newest-cni-937150 192.168.50.0/24...
	I0603 14:10:10.557546 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | private KVM network mk-newest-cni-937150 192.168.50.0/24 created
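At this point libmachine has defined and started a dedicated libvirt network from the XML printed above (DNS disabled, DHCP handing out 192.168.50.2 through .253 on the freshly chosen 192.168.50.0/24 subnet). As a sketch, the result could be inspected with standard virsh commands against the same URI this run uses (qemu:///system):

	virsh --connect qemu:///system net-list --all
	virsh --connect qemu:///system net-dumpxml mk-newest-cni-937150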
	I0603 14:10:10.557594 1149858 main.go:141] libmachine: (newest-cni-937150) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150 ...
	I0603 14:10:10.557623 1149858 main.go:141] libmachine: (newest-cni-937150) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 14:10:10.557647 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:10.557589 1149881 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 14:10:10.557890 1149858 main.go:141] libmachine: (newest-cni-937150) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0603 14:10:10.859248 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:10.859069 1149881 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa...
	I0603 14:10:11.063760 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:11.063617 1149881 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/newest-cni-937150.rawdisk...
	I0603 14:10:11.063809 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Writing magic tar header
	I0603 14:10:11.063827 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Writing SSH key tar header
	I0603 14:10:11.063841 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:11.063779 1149881 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150 ...
	I0603 14:10:11.063930 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150
	I0603 14:10:11.063981 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150 (perms=drwx------)
	I0603 14:10:11.064011 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
	I0603 14:10:11.064024 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
	I0603 14:10:11.064036 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 14:10:11.064046 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
	I0603 14:10:11.064063 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
	I0603 14:10:11.064076 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0603 14:10:11.064098 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
	I0603 14:10:11.064113 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0603 14:10:11.064124 1149858 main.go:141] libmachine: (newest-cni-937150) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0603 14:10:11.064133 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home/jenkins
	I0603 14:10:11.064147 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Checking permissions on dir: /home
	I0603 14:10:11.064157 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Skipping /home - not owner
	I0603 14:10:11.064165 1149858 main.go:141] libmachine: (newest-cni-937150) Creating domain...
	I0603 14:10:11.065459 1149858 main.go:141] libmachine: (newest-cni-937150) define libvirt domain using xml: 
	I0603 14:10:11.065495 1149858 main.go:141] libmachine: (newest-cni-937150) <domain type='kvm'>
	I0603 14:10:11.065507 1149858 main.go:141] libmachine: (newest-cni-937150)   <name>newest-cni-937150</name>
	I0603 14:10:11.065515 1149858 main.go:141] libmachine: (newest-cni-937150)   <memory unit='MiB'>2200</memory>
	I0603 14:10:11.065524 1149858 main.go:141] libmachine: (newest-cni-937150)   <vcpu>2</vcpu>
	I0603 14:10:11.065537 1149858 main.go:141] libmachine: (newest-cni-937150)   <features>
	I0603 14:10:11.065546 1149858 main.go:141] libmachine: (newest-cni-937150)     <acpi/>
	I0603 14:10:11.065555 1149858 main.go:141] libmachine: (newest-cni-937150)     <apic/>
	I0603 14:10:11.065566 1149858 main.go:141] libmachine: (newest-cni-937150)     <pae/>
	I0603 14:10:11.065576 1149858 main.go:141] libmachine: (newest-cni-937150)     
	I0603 14:10:11.065584 1149858 main.go:141] libmachine: (newest-cni-937150)   </features>
	I0603 14:10:11.065592 1149858 main.go:141] libmachine: (newest-cni-937150)   <cpu mode='host-passthrough'>
	I0603 14:10:11.065629 1149858 main.go:141] libmachine: (newest-cni-937150)   
	I0603 14:10:11.065645 1149858 main.go:141] libmachine: (newest-cni-937150)   </cpu>
	I0603 14:10:11.065654 1149858 main.go:141] libmachine: (newest-cni-937150)   <os>
	I0603 14:10:11.065662 1149858 main.go:141] libmachine: (newest-cni-937150)     <type>hvm</type>
	I0603 14:10:11.065680 1149858 main.go:141] libmachine: (newest-cni-937150)     <boot dev='cdrom'/>
	I0603 14:10:11.065691 1149858 main.go:141] libmachine: (newest-cni-937150)     <boot dev='hd'/>
	I0603 14:10:11.065702 1149858 main.go:141] libmachine: (newest-cni-937150)     <bootmenu enable='no'/>
	I0603 14:10:11.065712 1149858 main.go:141] libmachine: (newest-cni-937150)   </os>
	I0603 14:10:11.065720 1149858 main.go:141] libmachine: (newest-cni-937150)   <devices>
	I0603 14:10:11.065731 1149858 main.go:141] libmachine: (newest-cni-937150)     <disk type='file' device='cdrom'>
	I0603 14:10:11.065744 1149858 main.go:141] libmachine: (newest-cni-937150)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/boot2docker.iso'/>
	I0603 14:10:11.065757 1149858 main.go:141] libmachine: (newest-cni-937150)       <target dev='hdc' bus='scsi'/>
	I0603 14:10:11.065768 1149858 main.go:141] libmachine: (newest-cni-937150)       <readonly/>
	I0603 14:10:11.065778 1149858 main.go:141] libmachine: (newest-cni-937150)     </disk>
	I0603 14:10:11.065787 1149858 main.go:141] libmachine: (newest-cni-937150)     <disk type='file' device='disk'>
	I0603 14:10:11.065799 1149858 main.go:141] libmachine: (newest-cni-937150)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0603 14:10:11.065817 1149858 main.go:141] libmachine: (newest-cni-937150)       <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/newest-cni-937150.rawdisk'/>
	I0603 14:10:11.065828 1149858 main.go:141] libmachine: (newest-cni-937150)       <target dev='hda' bus='virtio'/>
	I0603 14:10:11.065843 1149858 main.go:141] libmachine: (newest-cni-937150)     </disk>
	I0603 14:10:11.065853 1149858 main.go:141] libmachine: (newest-cni-937150)     <interface type='network'>
	I0603 14:10:11.065870 1149858 main.go:141] libmachine: (newest-cni-937150)       <source network='mk-newest-cni-937150'/>
	I0603 14:10:11.065881 1149858 main.go:141] libmachine: (newest-cni-937150)       <model type='virtio'/>
	I0603 14:10:11.065889 1149858 main.go:141] libmachine: (newest-cni-937150)     </interface>
	I0603 14:10:11.065897 1149858 main.go:141] libmachine: (newest-cni-937150)     <interface type='network'>
	I0603 14:10:11.065906 1149858 main.go:141] libmachine: (newest-cni-937150)       <source network='default'/>
	I0603 14:10:11.065914 1149858 main.go:141] libmachine: (newest-cni-937150)       <model type='virtio'/>
	I0603 14:10:11.065948 1149858 main.go:141] libmachine: (newest-cni-937150)     </interface>
	I0603 14:10:11.065974 1149858 main.go:141] libmachine: (newest-cni-937150)     <serial type='pty'>
	I0603 14:10:11.065989 1149858 main.go:141] libmachine: (newest-cni-937150)       <target port='0'/>
	I0603 14:10:11.066001 1149858 main.go:141] libmachine: (newest-cni-937150)     </serial>
	I0603 14:10:11.066010 1149858 main.go:141] libmachine: (newest-cni-937150)     <console type='pty'>
	I0603 14:10:11.066021 1149858 main.go:141] libmachine: (newest-cni-937150)       <target type='serial' port='0'/>
	I0603 14:10:11.066028 1149858 main.go:141] libmachine: (newest-cni-937150)     </console>
	I0603 14:10:11.066043 1149858 main.go:141] libmachine: (newest-cni-937150)     <rng model='virtio'>
	I0603 14:10:11.066056 1149858 main.go:141] libmachine: (newest-cni-937150)       <backend model='random'>/dev/random</backend>
	I0603 14:10:11.066068 1149858 main.go:141] libmachine: (newest-cni-937150)     </rng>
	I0603 14:10:11.066079 1149858 main.go:141] libmachine: (newest-cni-937150)     
	I0603 14:10:11.066089 1149858 main.go:141] libmachine: (newest-cni-937150)     
	I0603 14:10:11.066098 1149858 main.go:141] libmachine: (newest-cni-937150)   </devices>
	I0603 14:10:11.066111 1149858 main.go:141] libmachine: (newest-cni-937150) </domain>
	I0603 14:10:11.066124 1149858 main.go:141] libmachine: (newest-cni-937150) 
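The XML printed above is the domain definition libmachine hands to libvirt for this profile. As a rough illustration only (minikube drives this through the libvirt API, not virsh), the same definition could be registered and booted by hand; the file name newest-cni-937150.xml is a placeholder for wherever the XML is saved.

  # Save the <domain>...</domain> XML shown above to a file, then:
  virsh --connect qemu:///system define newest-cni-937150.xml   # register the domain
  virsh --connect qemu:///system start newest-cni-937150        # boot it
  virsh --connect qemu:///system dumpxml newest-cni-937150      # confirm the stored definition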
	I0603 14:10:11.070908 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:49:67:db in network default
	I0603 14:10:11.071583 1149858 main.go:141] libmachine: (newest-cni-937150) Ensuring networks are active...
	I0603 14:10:11.071604 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:11.072703 1149858 main.go:141] libmachine: (newest-cni-937150) Ensuring network default is active
	I0603 14:10:11.073040 1149858 main.go:141] libmachine: (newest-cni-937150) Ensuring network mk-newest-cni-937150 is active
	I0603 14:10:11.073579 1149858 main.go:141] libmachine: (newest-cni-937150) Getting domain xml...
	I0603 14:10:11.074420 1149858 main.go:141] libmachine: (newest-cni-937150) Creating domain...
	I0603 14:10:12.353217 1149858 main.go:141] libmachine: (newest-cni-937150) Waiting to get IP...
	I0603 14:10:12.354164 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:12.354684 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:12.354715 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:12.354621 1149881 retry.go:31] will retry after 259.893127ms: waiting for machine to come up
	I0603 14:10:12.616214 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:12.616729 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:12.616772 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:12.616700 1149881 retry.go:31] will retry after 380.359ms: waiting for machine to come up
	I0603 14:10:12.998408 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:12.998910 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:12.998957 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:12.998864 1149881 retry.go:31] will retry after 488.054448ms: waiting for machine to come up
	I0603 14:10:13.488678 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:13.489157 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:13.489200 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:13.489102 1149881 retry.go:31] will retry after 414.42816ms: waiting for machine to come up
	I0603 14:10:13.905763 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:13.906365 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:13.906391 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:13.906303 1149881 retry.go:31] will retry after 521.411782ms: waiting for machine to come up
	I0603 14:10:14.429056 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:14.429543 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:14.429623 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:14.429519 1149881 retry.go:31] will retry after 814.866584ms: waiting for machine to come up
	I0603 14:10:15.245815 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:15.246317 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:15.246346 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:15.246258 1149881 retry.go:31] will retry after 1.021138707s: waiting for machine to come up
	I0603 14:10:16.268550 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:16.269101 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:16.269146 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:16.269064 1149881 retry.go:31] will retry after 1.167022182s: waiting for machine to come up
	I0603 14:10:17.437421 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:17.437866 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:17.437895 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:17.437817 1149881 retry.go:31] will retry after 1.415790047s: waiting for machine to come up
	I0603 14:10:18.855773 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:18.856289 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:18.856318 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:18.856227 1149881 retry.go:31] will retry after 2.184943297s: waiting for machine to come up
	I0603 14:10:21.043362 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:21.043841 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:21.043873 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:21.043763 1149881 retry.go:31] will retry after 1.770659238s: waiting for machine to come up
	I0603 14:10:22.816815 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:22.817222 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:22.817247 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:22.817177 1149881 retry.go:31] will retry after 3.405487359s: waiting for machine to come up
	I0603 14:10:26.223700 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:26.224169 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find current IP address of domain newest-cni-937150 in network mk-newest-cni-937150
	I0603 14:10:26.224215 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | I0603 14:10:26.224129 1149881 retry.go:31] will retry after 4.352919539s: waiting for machine to come up
	I0603 14:10:30.578405 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.578883 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has current primary IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.578913 1149858 main.go:141] libmachine: (newest-cni-937150) Found IP for machine: 192.168.50.117
	I0603 14:10:30.578927 1149858 main.go:141] libmachine: (newest-cni-937150) Reserving static IP address...
	I0603 14:10:30.579338 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | unable to find host DHCP lease matching {name: "newest-cni-937150", mac: "52:54:00:86:c8:b7", ip: "192.168.50.117"} in network mk-newest-cni-937150
	I0603 14:10:30.659372 1149858 main.go:141] libmachine: (newest-cni-937150) Reserved static IP address: 192.168.50.117
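The "waiting for machine to come up" retries above end once the domain's MAC shows up in the libvirt network's DHCP lease table, which is also how the static reservation for 192.168.50.117 is confirmed. The same view is available by hand via the lease tables of the two networks named in the log:

  # Leases on the per-profile network and on the default NAT network
  virsh --connect qemu:///system net-dhcp-leases mk-newest-cni-937150
  virsh --connect qemu:///system net-dhcp-leases default
  # Or filter for the MAC assigned to this domain
  virsh --connect qemu:///system net-dhcp-leases mk-newest-cni-937150 | grep -i 52:54:00:86:c8:b7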
	I0603 14:10:30.659404 1149858 main.go:141] libmachine: (newest-cni-937150) Waiting for SSH to be available...
	I0603 14:10:30.659414 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Getting to WaitForSSH function...
	I0603 14:10:30.662675 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.663210 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:30.663248 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.663392 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Using SSH client type: external
	I0603 14:10:30.663413 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa (-rw-------)
	I0603 14:10:30.663443 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 14:10:30.663453 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | About to run SSH command:
	I0603 14:10:30.663462 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | exit 0
	I0603 14:10:30.793746 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | SSH cmd err, output: <nil>: 
	I0603 14:10:30.794052 1149858 main.go:141] libmachine: (newest-cni-937150) KVM machine creation complete!
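The WaitForSSH probe above is an external ssh invocation that simply runs "exit 0" with the generated machine key. When this step hangs, a trimmed-down manual re-run of the same probe (options taken from the log line above) is a quick liveness check:

  ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o PasswordAuthentication=no -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa \
      -p 22 docker@192.168.50.117 'exit 0' && echo "guest is reachable over SSH"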
	I0603 14:10:30.794357 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetConfigRaw
	I0603 14:10:30.794981 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:30.795200 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:30.795436 1149858 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0603 14:10:30.795463 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetState
	I0603 14:10:30.796938 1149858 main.go:141] libmachine: Detecting operating system of created instance...
	I0603 14:10:30.796963 1149858 main.go:141] libmachine: Waiting for SSH to be available...
	I0603 14:10:30.796969 1149858 main.go:141] libmachine: Getting to WaitForSSH function...
	I0603 14:10:30.796975 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:30.799957 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.800464 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:30.800514 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.800687 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:30.800859 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:30.801037 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:30.801178 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:30.801343 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:30.801644 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:30.801662 1149858 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0603 14:10:30.916985 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 14:10:30.917013 1149858 main.go:141] libmachine: Detecting the provisioner...
	I0603 14:10:30.917026 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:30.920194 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.920652 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:30.920702 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:30.920869 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:30.921116 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:30.921362 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:30.921548 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:30.921720 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:30.921946 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:30.921961 1149858 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0603 14:10:31.038406 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0603 14:10:31.038510 1149858 main.go:141] libmachine: found compatible host: buildroot
	I0603 14:10:31.038520 1149858 main.go:141] libmachine: Provisioning with buildroot...
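Provisioner detection here is just the "cat /etc/os-release" shown above, matched against known distributions (Buildroot for this ISO). The same check can be reproduced inside the guest:

  # Inside the guest (for example via the ssh command above)
  . /etc/os-release
  echo "ID=$ID VERSION_ID=$VERSION_ID PRETTY_NAME=$PRETTY_NAME"
  # Expected for this ISO: ID=buildroot VERSION_ID=2023.02.9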
	I0603 14:10:31.038537 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetMachineName
	I0603 14:10:31.038827 1149858 buildroot.go:166] provisioning hostname "newest-cni-937150"
	I0603 14:10:31.038863 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetMachineName
	I0603 14:10:31.039075 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.042115 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.042590 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.042627 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.042676 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:31.042909 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.043120 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.043295 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:31.043502 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:31.043658 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:31.043673 1149858 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-937150 && echo "newest-cni-937150" | sudo tee /etc/hostname
	I0603 14:10:31.172948 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-937150
	
	I0603 14:10:31.172982 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.175909 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.176278 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.176319 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.176492 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:31.176724 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.176927 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.177075 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:31.177265 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:31.177514 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:31.177532 1149858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-937150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-937150/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-937150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 14:10:31.301133 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 14:10:31.301174 1149858 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 14:10:31.301233 1149858 buildroot.go:174] setting up certificates
	I0603 14:10:31.301248 1149858 provision.go:84] configureAuth start
	I0603 14:10:31.301262 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetMachineName
	I0603 14:10:31.301691 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetIP
	I0603 14:10:31.304488 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.304830 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.304850 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.305012 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.307547 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.307892 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.307926 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.308151 1149858 provision.go:143] copyHostCerts
	I0603 14:10:31.308221 1149858 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 14:10:31.308242 1149858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 14:10:31.308308 1149858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 14:10:31.308442 1149858 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 14:10:31.308453 1149858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 14:10:31.308479 1149858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 14:10:31.308563 1149858 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 14:10:31.308571 1149858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 14:10:31.308593 1149858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 14:10:31.308639 1149858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.newest-cni-937150 san=[127.0.0.1 192.168.50.117 localhost minikube newest-cni-937150]
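The server certificate generated here carries the SANs listed in the log (127.0.0.1, the guest IP, localhost, minikube and the profile name). A hedged way to double-check what was actually signed, assuming the server.pem path above is readable on the host, is to dump it with openssl:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'
  # Should include the SANs from the log above (IP:127.0.0.1, IP:192.168.50.117,
  # DNS:localhost, DNS:minikube, DNS:newest-cni-937150); ordering may differ.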
	I0603 14:10:31.706209 1149858 provision.go:177] copyRemoteCerts
	I0603 14:10:31.706272 1149858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 14:10:31.706314 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.709224 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.709619 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.709657 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.709918 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:31.710161 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.710341 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:31.710557 1149858 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa Username:docker}
	I0603 14:10:31.795463 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 14:10:31.823433 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 14:10:31.849834 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 14:10:31.875234 1149858 provision.go:87] duration metric: took 573.971415ms to configureAuth
	I0603 14:10:31.875263 1149858 buildroot.go:189] setting minikube options for container-runtime
	I0603 14:10:31.875484 1149858 config.go:182] Loaded profile config "newest-cni-937150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 14:10:31.875587 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:31.878457 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.878781 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:31.878811 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:31.879029 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:31.879253 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.879526 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:31.879729 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:31.879918 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:31.880107 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:31.880129 1149858 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 14:10:32.183853 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 14:10:32.183921 1149858 main.go:141] libmachine: Checking connection to Docker...
	I0603 14:10:32.183935 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetURL
	I0603 14:10:32.185578 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | Using libvirt version 6000000
	I0603 14:10:32.188015 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.188388 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.188422 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.188588 1149858 main.go:141] libmachine: Docker is up and running!
	I0603 14:10:32.188607 1149858 main.go:141] libmachine: Reticulating splines...
	I0603 14:10:32.188616 1149858 client.go:171] duration metric: took 21.71719522s to LocalClient.Create
	I0603 14:10:32.188653 1149858 start.go:167] duration metric: took 21.717275209s to libmachine.API.Create "newest-cni-937150"
	I0603 14:10:32.188666 1149858 start.go:293] postStartSetup for "newest-cni-937150" (driver="kvm2")
	I0603 14:10:32.188681 1149858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 14:10:32.188705 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.188982 1149858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 14:10:32.189007 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:32.191264 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.191625 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.191662 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.191801 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:32.191990 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.192129 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:32.192296 1149858 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa Username:docker}
	I0603 14:10:32.281048 1149858 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 14:10:32.285937 1149858 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 14:10:32.285968 1149858 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 14:10:32.286066 1149858 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 14:10:32.286192 1149858 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 14:10:32.286347 1149858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 14:10:32.296925 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 14:10:32.323191 1149858 start.go:296] duration metric: took 134.509724ms for postStartSetup
	I0603 14:10:32.323285 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetConfigRaw
	I0603 14:10:32.323977 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetIP
	I0603 14:10:32.326874 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.327266 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.327297 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.327660 1149858 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/config.json ...
	I0603 14:10:32.327865 1149858 start.go:128] duration metric: took 21.875654351s to createHost
	I0603 14:10:32.327906 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:32.330266 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.330570 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.330600 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.330754 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:32.330950 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.331103 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.331236 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:32.331391 1149858 main.go:141] libmachine: Using SSH client type: native
	I0603 14:10:32.331606 1149858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0603 14:10:32.331624 1149858 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 14:10:32.450769 1149858 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717423832.419930077
	
	I0603 14:10:32.450799 1149858 fix.go:216] guest clock: 1717423832.419930077
	I0603 14:10:32.450809 1149858 fix.go:229] Guest: 2024-06-03 14:10:32.419930077 +0000 UTC Remote: 2024-06-03 14:10:32.32788916 +0000 UTC m=+21.990339354 (delta=92.040917ms)
	I0603 14:10:32.450853 1149858 fix.go:200] guest clock delta is within tolerance: 92.040917ms
	I0603 14:10:32.450864 1149858 start.go:83] releasing machines lock for "newest-cni-937150", held for 21.998803352s
	I0603 14:10:32.450900 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.451205 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetIP
	I0603 14:10:32.454324 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.454704 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.454735 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.454911 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.455517 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.455751 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .DriverName
	I0603 14:10:32.455834 1149858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 14:10:32.455891 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:32.456032 1149858 ssh_runner.go:195] Run: cat /version.json
	I0603 14:10:32.456063 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHHostname
	I0603 14:10:32.458708 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.458932 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.459076 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.459106 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.459244 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:32.459278 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:32.459284 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:32.459450 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHPort
	I0603 14:10:32.459565 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.459675 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHKeyPath
	I0603 14:10:32.459722 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:32.459834 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetSSHUsername
	I0603 14:10:32.459954 1149858 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa Username:docker}
	I0603 14:10:32.460007 1149858 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/newest-cni-937150/id_rsa Username:docker}
	I0603 14:10:32.571744 1149858 ssh_runner.go:195] Run: systemctl --version
	I0603 14:10:32.578153 1149858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 14:10:32.734363 1149858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 14:10:32.740408 1149858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 14:10:32.740487 1149858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 14:10:32.758408 1149858 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
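The find/mv sequence above renames any pre-existing bridge or podman CNI configs to *.mk_disabled so they will not compete with the CNI minikube installs later. The effect can be confirmed from inside the guest:

  ls -l /etc/cni/net.d/
  # 87-podman-bridge.conflist should now carry the .mk_disabled suffix; no loopback
  # config existed on this ISO (see the "not found" note above).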
	I0603 14:10:32.758435 1149858 start.go:494] detecting cgroup driver to use...
	I0603 14:10:32.758533 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 14:10:32.776560 1149858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 14:10:32.792117 1149858 docker.go:217] disabling cri-docker service (if available) ...
	I0603 14:10:32.792177 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 14:10:32.808337 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 14:10:32.823964 1149858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 14:10:32.949275 1149858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 14:10:33.102685 1149858 docker.go:233] disabling docker service ...
	I0603 14:10:33.102768 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 14:10:33.117681 1149858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 14:10:33.131293 1149858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 14:10:33.254836 1149858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 14:10:33.389135 1149858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
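The steps from 14:10:32.79 to 14:10:33.39 stop, disable and mask cri-docker and docker so that CRI-O is the only runtime left for the kubelet to talk to. A condensed version of the same sequence of systemctl calls run over SSH above:

  sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
  sudo systemctl disable cri-docker.socket docker.socket
  sudo systemctl mask cri-docker.service docker.service
  systemctl is-active docker || echo "docker is not running"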
	I0603 14:10:33.404530 1149858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 14:10:33.425480 1149858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 14:10:33.425550 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.437504 1149858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 14:10:33.437578 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.449342 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.460206 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.470545 1149858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 14:10:33.480990 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.491735 1149858 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 14:10:33.510331 1149858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
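The sed series from 14:10:33.42 to 14:10:33.51 rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroupfs as the cgroup manager, conmon in the pod cgroup, and an unprivileged-port sysctl. After those edits the drop-in should contain roughly the following values (surrounding keys depend on the ISO's stock file):

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
  # Expected (roughly):
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   default_sysctls = [
  #     "net.ipv4.ip_unprivileged_port_start=0",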
	I0603 14:10:33.521151 1149858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 14:10:33.531116 1149858 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 14:10:33.531184 1149858 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 14:10:33.544277 1149858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
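Because the probe at 14:10:33.53 shows br_netfilter is not loaded on this guest, the module is loaded explicitly and IPv4 forwarding is switched on; both are prerequisites for kube-proxy and bridged CNI traffic. Done by hand, the same fix and its verification look like:

  sudo modprobe br_netfilter
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
  sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should now resolve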
	I0603 14:10:33.554136 1149858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:10:33.676654 1149858 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 14:10:33.834506 1149858 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 14:10:33.834595 1149858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 14:10:33.840526 1149858 start.go:562] Will wait 60s for crictl version
	I0603 14:10:33.840595 1149858 ssh_runner.go:195] Run: which crictl
	I0603 14:10:33.845114 1149858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 14:10:33.892075 1149858 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
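The crictl.yaml written at 14:10:33.40 points crictl at CRI-O's socket, and the version probe above confirms CRI-O 1.29.1 is answering on it. A manual spot check of the same wiring:

  cat /etc/crictl.yaml
  # runtime-endpoint: unix:///var/run/crio/crio.sock
  sudo crictl version    # should report RuntimeName: cri-o once crio.service is up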
	I0603 14:10:33.892152 1149858 ssh_runner.go:195] Run: crio --version
	I0603 14:10:33.922949 1149858 ssh_runner.go:195] Run: crio --version
	I0603 14:10:33.955395 1149858 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 14:10:33.956703 1149858 main.go:141] libmachine: (newest-cni-937150) Calling .GetIP
	I0603 14:10:33.959426 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:33.959838 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:c8:b7", ip: ""} in network mk-newest-cni-937150: {Iface:virbr2 ExpiryTime:2024-06-03 15:10:25 +0000 UTC Type:0 Mac:52:54:00:86:c8:b7 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:newest-cni-937150 Clientid:01:52:54:00:86:c8:b7}
	I0603 14:10:33.959866 1149858 main.go:141] libmachine: (newest-cni-937150) DBG | domain newest-cni-937150 has defined IP address 192.168.50.117 and MAC address 52:54:00:86:c8:b7 in network mk-newest-cni-937150
	I0603 14:10:33.960150 1149858 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 14:10:33.964703 1149858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 14:10:33.980474 1149858 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0603 14:10:33.981718 1149858 kubeadm.go:877] updating cluster {Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 14:10:33.981855 1149858 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 14:10:33.981929 1149858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 14:10:34.019028 1149858 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 14:10:34.019122 1149858 ssh_runner.go:195] Run: which lz4
	I0603 14:10:34.023813 1149858 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 14:10:34.028406 1149858 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 14:10:34.028436 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 14:10:35.555838 1149858 crio.go:462] duration metric: took 1.532063899s to copy over tarball
	I0603 14:10:35.555925 1149858 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 14:10:37.872818 1149858 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.316853217s)
	I0603 14:10:37.872853 1149858 crio.go:469] duration metric: took 2.316978379s to extract the tarball
	I0603 14:10:37.872864 1149858 ssh_runner.go:146] rm: /preloaded.tar.lz4
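The preload path above is: scp the ~395 MB preloaded-images tarball into the guest, unpack it into /var with lz4, then delete the tarball, after which crictl reports every required image as already present. The unpack step mirrors the command in the log:

  # Run inside the guest after /preloaded.tar.lz4 has been copied over
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm /preloaded.tar.lz4
  sudo crictl images --output json | head   # preloaded images should now be listed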
	I0603 14:10:37.913205 1149858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 14:10:37.962131 1149858 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 14:10:37.962164 1149858 cache_images.go:84] Images are preloaded, skipping loading
	I0603 14:10:37.962180 1149858 kubeadm.go:928] updating node { 192.168.50.117 8443 v1.30.1 crio true true} ...
	I0603 14:10:37.962353 1149858 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-937150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 14:10:37.962440 1149858 ssh_runner.go:195] Run: crio config
	I0603 14:10:38.014940 1149858 cni.go:84] Creating CNI manager for ""
	I0603 14:10:38.014962 1149858 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 14:10:38.014971 1149858 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0603 14:10:38.014994 1149858 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.117 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-937150 NodeName:newest-cni-937150 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.50.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 14:10:38.015151 1149858 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-937150"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
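
The rendered kubeadm configuration above is what later gets written to /var/tmp/minikube/kubeadm.yaml before init runs. As a sketch only (not part of the test), such a config can be sanity-checked on the node without changing cluster state:

    # Print what kubeadm would generate from this config without applying anything
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run

    # Compare the overrides above against kubeadm's built-in defaults
    kubeadm config print init-defaults
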
	
	I0603 14:10:38.015216 1149858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 14:10:38.028856 1149858 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 14:10:38.028944 1149858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 14:10:38.039665 1149858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0603 14:10:38.057433 1149858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 14:10:38.075806 1149858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0603 14:10:38.095263 1149858 ssh_runner.go:195] Run: grep 192.168.50.117	control-plane.minikube.internal$ /etc/hosts
	I0603 14:10:38.099443 1149858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.117	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
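
The one-liner above keeps /etc/hosts idempotent: any existing control-plane.minikube.internal entry is dropped, the current mapping is appended, and the result is copied back with sudo. Expanded into separate steps it is roughly:

    # Drop any stale entry for the control-plane alias, then append the current mapping.
    # Writing to a temporary file avoids truncating /etc/hosts while it is being read.
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
    printf '192.168.50.117\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts
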
	I0603 14:10:38.112429 1149858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 14:10:38.248428 1149858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 14:10:38.267105 1149858 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150 for IP: 192.168.50.117
	I0603 14:10:38.267138 1149858 certs.go:194] generating shared ca certs ...
	I0603 14:10:38.267159 1149858 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.267393 1149858 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 14:10:38.267472 1149858 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 14:10:38.267492 1149858 certs.go:256] generating profile certs ...
	I0603 14:10:38.267580 1149858 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.key
	I0603 14:10:38.267608 1149858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.crt with IP's: []
	I0603 14:10:38.388932 1149858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.crt ...
	I0603 14:10:38.388966 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.crt: {Name:mkeac1660cd9acdbde243d96ed0eaf6d3aafc544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.389206 1149858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.key ...
	I0603 14:10:38.389226 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/client.key: {Name:mk3c5dcc647c1a29bae27c60136d087199f77c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.389323 1149858 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key.095d4dda
	I0603 14:10:38.389340 1149858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt.095d4dda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.117]
	I0603 14:10:38.553425 1149858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt.095d4dda ...
	I0603 14:10:38.553459 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt.095d4dda: {Name:mkc545fa3ddf8db013caa5ca7400370bda54bcf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.553631 1149858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key.095d4dda ...
	I0603 14:10:38.553644 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key.095d4dda: {Name:mkb5e93bb41b676b2b6fac31f5e42d8cf2dff975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.553731 1149858 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt.095d4dda -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt
	I0603 14:10:38.553811 1149858 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key.095d4dda -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key
	I0603 14:10:38.553865 1149858 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.key
	I0603 14:10:38.553881 1149858 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.crt with IP's: []
	I0603 14:10:38.779341 1149858 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.crt ...
	I0603 14:10:38.779375 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.crt: {Name:mk2e008c1cb6dc4fb341b2e569324ba9b7ca2339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 14:10:38.779605 1149858 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.key ...
	I0603 14:10:38.779624 1149858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.key: {Name:mkd05bb014dcd066b3793441b0da6f58fa48ffdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
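
The profile certificates above (client, apiserver, proxy-client) are produced by minikube's own Go crypto helpers and signed by the shared minikubeCA. Purely for illustration, an equivalent apiserver certificate with the same IP SANs could be created with openssl (file names here are placeholders, not minikube's layout):

    # Key + CSR, then sign with the cluster CA, adding the IP SANs seen in the log
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.50.117')
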
	I0603 14:10:38.779862 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 14:10:38.779906 1149858 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 14:10:38.779923 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 14:10:38.779957 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 14:10:38.779985 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 14:10:38.780025 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 14:10:38.780085 1149858 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 14:10:38.780753 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 14:10:38.815194 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 14:10:38.844375 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 14:10:38.872853 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 14:10:38.900181 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 14:10:38.927682 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 14:10:38.956080 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 14:10:38.985021 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/newest-cni-937150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 14:10:39.013358 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 14:10:39.042106 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 14:10:39.072045 1149858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 14:10:39.099885 1149858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 14:10:39.118274 1149858 ssh_runner.go:195] Run: openssl version
	I0603 14:10:39.125064 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 14:10:39.136385 1149858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 14:10:39.141326 1149858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 14:10:39.141383 1149858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 14:10:39.147900 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 14:10:39.159667 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 14:10:39.171044 1149858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:10:39.175817 1149858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:10:39.175882 1149858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 14:10:39.182138 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 14:10:39.206779 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 14:10:39.224524 1149858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 14:10:39.230510 1149858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 14:10:39.230584 1149858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 14:10:39.245290 1149858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
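
The block above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients look up trusted CAs. The same link can be reproduced by hand:

    # Compute the subject hash OpenSSL uses for the symlink name, then create the link
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
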
	I0603 14:10:39.262292 1149858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 14:10:39.266909 1149858 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0603 14:10:39.266964 1149858 kubeadm.go:391] StartCluster: {Name:newest-cni-937150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:newest-cni-937150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 14:10:39.267048 1149858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 14:10:39.267111 1149858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 14:10:39.311861 1149858 cri.go:89] found id: ""
	I0603 14:10:39.311950 1149858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0603 14:10:39.328960 1149858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 14:10:39.340618 1149858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 14:10:39.351555 1149858 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 14:10:39.351581 1149858 kubeadm.go:156] found existing configuration files:
	
	I0603 14:10:39.351634 1149858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 14:10:39.361659 1149858 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 14:10:39.361735 1149858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 14:10:39.372490 1149858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 14:10:39.382854 1149858 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 14:10:39.382918 1149858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 14:10:39.394239 1149858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 14:10:39.403819 1149858 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 14:10:39.403896 1149858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 14:10:39.413875 1149858 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 14:10:39.424160 1149858 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 14:10:39.424235 1149858 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
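
The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm can regenerate it. A compact sketch of the same loop:

    # Remove kubeconfigs that do not reference the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
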
	I0603 14:10:39.434930 1149858 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 14:10:39.564729 1149858 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 14:10:39.564800 1149858 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 14:10:39.690185 1149858 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 14:10:39.690312 1149858 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 14:10:39.690452 1149858 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 14:10:39.910223 1149858 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 14:10:40.032509 1149858 out.go:204]   - Generating certificates and keys ...
	I0603 14:10:40.032624 1149858 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 14:10:40.032709 1149858 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 14:10:40.213537 1149858 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0603 14:10:40.408041 1149858 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0603 14:10:40.617896 1149858 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0603 14:10:40.771057 1149858 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0603 14:10:40.882541 1149858 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0603 14:10:40.882764 1149858 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-937150] and IPs [192.168.50.117 127.0.0.1 ::1]
	I0603 14:10:41.058241 1149858 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0603 14:10:41.058581 1149858 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-937150] and IPs [192.168.50.117 127.0.0.1 ::1]
	I0603 14:10:41.354995 1149858 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0603 14:10:41.591380 1149858 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0603 14:10:41.684429 1149858 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0603 14:10:41.684506 1149858 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 14:10:41.839182 1149858 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 14:10:42.189547 1149858 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 14:10:42.548626 1149858 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 14:10:42.721883 1149858 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 14:10:42.917578 1149858 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 14:10:42.918380 1149858 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 14:10:42.921799 1149858 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 14:10:42.923762 1149858 out.go:204]   - Booting up control plane ...
	I0603 14:10:42.923885 1149858 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 14:10:42.923973 1149858 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 14:10:42.927125 1149858 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 14:10:42.952275 1149858 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 14:10:42.953628 1149858 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 14:10:42.953701 1149858 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 14:10:43.084989 1149858 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 14:10:43.085144 1149858 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 14:10:43.605917 1149858 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 521.163342ms
	I0603 14:10:43.606026 1149858 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 14:10:49.103486 1149858 kubeadm.go:309] [api-check] The API server is healthy after 5.501823647s
	I0603 14:10:49.117564 1149858 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 14:10:49.133952 1149858 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 14:10:49.162476 1149858 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 14:10:49.162725 1149858 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-937150 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 14:10:49.176769 1149858 kubeadm.go:309] [bootstrap-token] Using token: 9iogr7.p4sgbr1j5c0rj9kv
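
kubeadm has reached the bootstrap-token phase and reports the token it will use for node joins. On a running control plane the tokens and a ready-made join command can be inspected with kubeadm itself (shown here only as a pointer, not something the test runs):

    # List existing bootstrap tokens and their expiry; mint a new one with a full join command
    sudo kubeadm token list
    sudo kubeadm token create --print-join-command
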
	
	
	==> CRI-O <==
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.569668424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423850569639563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b30fdf0-06b6-4ef3-80f0-2fdc27451980 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.570445838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bd1a5b9-23e1-42e9-ab64-2cd93e59f5a7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.570562681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bd1a5b9-23e1-42e9-ab64-2cd93e59f5a7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.570821791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e396c54cedb357a57ac99faf2643088bd5f4ac32e3d087d6b89793d9ec4eeb08,PodSandboxId:3b302ef8b6487e80d6530924059a703e7464f8e33800431474b978710a47ed55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422983947690813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22655fc-5571-496e-a93f-3970d1693435,},Annotations:map[string]string{io.kubernetes.container.hash: 15a9a52d,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27c4962bb8982cea91927cd93e77d3643624f2c8bdcea1c57eb199d8f543711,PodSandboxId:97cb74082cec07a03da7f17124d58169d0d5ce6aae5edd2354bac9172ac01fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983345182653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f8pbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201e687b-1c1b-4030-8b59-b0257a0f876c,},Annotations:map[string]string{io.kubernetes.container.hash: b7f28944,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84874b63b7fd20e3d3decf270af77e030db0b85759bd926f234f6279866448ef,PodSandboxId:5241ddb542f5c62f506f5b18a1941134c2de81ac18f5ae8821b307f4f5883d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983362215651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jgk4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75
956644-426d-49a7-b80c-492c4284f438,},Annotations:map[string]string{io.kubernetes.container.hash: 351ca9b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:526214f62ac9877b8679bd74972c9e5a7fff1255392e765ade04863755dcd249,PodSandboxId:4abcb5707b6281b704951dd006b0fc5235c4a67bdea8cb5d9884e7caf44fe1a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717422982692339386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t45fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578c151-2b36-4125-83f8-f4fbd62a1dc4,},Annotations:map[string]string{io.kubernetes.container.hash: 112a4db8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3712516987b541315caa42ec191de1f8490381a8905601bc18670783e5b465a5,PodSandboxId:70a59eaf41f06bbd52f930cfcd9094a16ee18caa9e889af1077f5514e16c8b59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422962836173073,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb44b3b772d07e11b206e5b0f01ae231,},Annotations:map[string]string{io.kubernetes.container.hash: 866818ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ce44663e901258ef10d9f88a3641193e8adb038271772c2d6c42f3265d96a2,PodSandboxId:7706a8490750828708105889445e13e10bec6f79ba757e00264a76ca5a45746d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422962784502292,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8375333af2ee4d43244d7eb8597636ed,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2ab9144d51792c0c66d2253bdb0cd54218b45f89025eb3186e6d8a4cf17b13,PodSandboxId:32b672762a022aeb960a5dc333847256cd576a93185aa0f3fb8194f4ceec2629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422962753743904,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bd39c3f2658eb439cf6bf643a144708940ab9ca35268e56fb38dd0f2b2d0ba,PodSandboxId:2d0df016f48d0b036573ad32fbe476386c90d4c761148c4fccbbb88836f4a372,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422962660373747,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e13d65dd08a92b3dadcddfd215dae3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ddf7aff7fc5db99feefd13d7ac8694eade514d615f96858e0a149d53aeda96,PodSandboxId:22af9160ec5ca93dfc01af0b91c1583b0172a9efef2b983b6648117cb92bea4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717422669211181789,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bd1a5b9-23e1-42e9-ab64-2cd93e59f5a7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.618159153Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d7c84d4-2c50-48a3-be58-2d90a472deb2 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.618293322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d7c84d4-2c50-48a3-be58-2d90a472deb2 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.619757161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b35316f4-fe1d-4481-ae27-419a17f7a0ac name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.620503825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423850620475757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b35316f4-fe1d-4481-ae27-419a17f7a0ac name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.621166323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a58a94f-52fe-47d8-bd45-8972d7970a7c name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.621219947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a58a94f-52fe-47d8-bd45-8972d7970a7c name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.621698061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e396c54cedb357a57ac99faf2643088bd5f4ac32e3d087d6b89793d9ec4eeb08,PodSandboxId:3b302ef8b6487e80d6530924059a703e7464f8e33800431474b978710a47ed55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422983947690813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22655fc-5571-496e-a93f-3970d1693435,},Annotations:map[string]string{io.kubernetes.container.hash: 15a9a52d,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27c4962bb8982cea91927cd93e77d3643624f2c8bdcea1c57eb199d8f543711,PodSandboxId:97cb74082cec07a03da7f17124d58169d0d5ce6aae5edd2354bac9172ac01fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983345182653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f8pbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201e687b-1c1b-4030-8b59-b0257a0f876c,},Annotations:map[string]string{io.kubernetes.container.hash: b7f28944,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84874b63b7fd20e3d3decf270af77e030db0b85759bd926f234f6279866448ef,PodSandboxId:5241ddb542f5c62f506f5b18a1941134c2de81ac18f5ae8821b307f4f5883d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983362215651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jgk4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75
956644-426d-49a7-b80c-492c4284f438,},Annotations:map[string]string{io.kubernetes.container.hash: 351ca9b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:526214f62ac9877b8679bd74972c9e5a7fff1255392e765ade04863755dcd249,PodSandboxId:4abcb5707b6281b704951dd006b0fc5235c4a67bdea8cb5d9884e7caf44fe1a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717422982692339386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t45fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578c151-2b36-4125-83f8-f4fbd62a1dc4,},Annotations:map[string]string{io.kubernetes.container.hash: 112a4db8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3712516987b541315caa42ec191de1f8490381a8905601bc18670783e5b465a5,PodSandboxId:70a59eaf41f06bbd52f930cfcd9094a16ee18caa9e889af1077f5514e16c8b59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422962836173073,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb44b3b772d07e11b206e5b0f01ae231,},Annotations:map[string]string{io.kubernetes.container.hash: 866818ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ce44663e901258ef10d9f88a3641193e8adb038271772c2d6c42f3265d96a2,PodSandboxId:7706a8490750828708105889445e13e10bec6f79ba757e00264a76ca5a45746d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422962784502292,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8375333af2ee4d43244d7eb8597636ed,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2ab9144d51792c0c66d2253bdb0cd54218b45f89025eb3186e6d8a4cf17b13,PodSandboxId:32b672762a022aeb960a5dc333847256cd576a93185aa0f3fb8194f4ceec2629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422962753743904,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bd39c3f2658eb439cf6bf643a144708940ab9ca35268e56fb38dd0f2b2d0ba,PodSandboxId:2d0df016f48d0b036573ad32fbe476386c90d4c761148c4fccbbb88836f4a372,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422962660373747,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e13d65dd08a92b3dadcddfd215dae3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ddf7aff7fc5db99feefd13d7ac8694eade514d615f96858e0a149d53aeda96,PodSandboxId:22af9160ec5ca93dfc01af0b91c1583b0172a9efef2b983b6648117cb92bea4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717422669211181789,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a58a94f-52fe-47d8-bd45-8972d7970a7c name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.666844165Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c0c6c62-b7df-4c74-9312-6d4a049707c0 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.667005947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c0c6c62-b7df-4c74-9312-6d4a049707c0 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.668651753Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ad527aa-c84a-48f5-9cb6-8e0ea021bcdc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.669216289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423850669182245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ad527aa-c84a-48f5-9cb6-8e0ea021bcdc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.670043488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f33507a-e4eb-45ea-8ae3-dc6cd0bb896e name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.670115902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f33507a-e4eb-45ea-8ae3-dc6cd0bb896e name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.670414682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e396c54cedb357a57ac99faf2643088bd5f4ac32e3d087d6b89793d9ec4eeb08,PodSandboxId:3b302ef8b6487e80d6530924059a703e7464f8e33800431474b978710a47ed55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422983947690813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22655fc-5571-496e-a93f-3970d1693435,},Annotations:map[string]string{io.kubernetes.container.hash: 15a9a52d,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27c4962bb8982cea91927cd93e77d3643624f2c8bdcea1c57eb199d8f543711,PodSandboxId:97cb74082cec07a03da7f17124d58169d0d5ce6aae5edd2354bac9172ac01fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983345182653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f8pbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201e687b-1c1b-4030-8b59-b0257a0f876c,},Annotations:map[string]string{io.kubernetes.container.hash: b7f28944,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84874b63b7fd20e3d3decf270af77e030db0b85759bd926f234f6279866448ef,PodSandboxId:5241ddb542f5c62f506f5b18a1941134c2de81ac18f5ae8821b307f4f5883d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983362215651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jgk4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75
956644-426d-49a7-b80c-492c4284f438,},Annotations:map[string]string{io.kubernetes.container.hash: 351ca9b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:526214f62ac9877b8679bd74972c9e5a7fff1255392e765ade04863755dcd249,PodSandboxId:4abcb5707b6281b704951dd006b0fc5235c4a67bdea8cb5d9884e7caf44fe1a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717422982692339386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t45fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578c151-2b36-4125-83f8-f4fbd62a1dc4,},Annotations:map[string]string{io.kubernetes.container.hash: 112a4db8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3712516987b541315caa42ec191de1f8490381a8905601bc18670783e5b465a5,PodSandboxId:70a59eaf41f06bbd52f930cfcd9094a16ee18caa9e889af1077f5514e16c8b59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422962836173073,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb44b3b772d07e11b206e5b0f01ae231,},Annotations:map[string]string{io.kubernetes.container.hash: 866818ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ce44663e901258ef10d9f88a3641193e8adb038271772c2d6c42f3265d96a2,PodSandboxId:7706a8490750828708105889445e13e10bec6f79ba757e00264a76ca5a45746d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422962784502292,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8375333af2ee4d43244d7eb8597636ed,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2ab9144d51792c0c66d2253bdb0cd54218b45f89025eb3186e6d8a4cf17b13,PodSandboxId:32b672762a022aeb960a5dc333847256cd576a93185aa0f3fb8194f4ceec2629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422962753743904,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bd39c3f2658eb439cf6bf643a144708940ab9ca35268e56fb38dd0f2b2d0ba,PodSandboxId:2d0df016f48d0b036573ad32fbe476386c90d4c761148c4fccbbb88836f4a372,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422962660373747,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e13d65dd08a92b3dadcddfd215dae3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ddf7aff7fc5db99feefd13d7ac8694eade514d615f96858e0a149d53aeda96,PodSandboxId:22af9160ec5ca93dfc01af0b91c1583b0172a9efef2b983b6648117cb92bea4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717422669211181789,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f33507a-e4eb-45ea-8ae3-dc6cd0bb896e name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.712184039Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7162cd2-fe82-4187-adb5-e67cc553e634 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.712290082Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7162cd2-fe82-4187-adb5-e67cc553e634 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.714098902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=919cdbcd-dc52-4523-adf2-6a6e49054c57 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.714603591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423850714568418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=919cdbcd-dc52-4523-adf2-6a6e49054c57 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.715455535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3087a7a-9401-472c-a3ce-aeeb3806ce38 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.715535218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3087a7a-9401-472c-a3ce-aeeb3806ce38 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:50 no-preload-817450 crio[720]: time="2024-06-03 14:10:50.715814430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e396c54cedb357a57ac99faf2643088bd5f4ac32e3d087d6b89793d9ec4eeb08,PodSandboxId:3b302ef8b6487e80d6530924059a703e7464f8e33800431474b978710a47ed55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717422983947690813,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22655fc-5571-496e-a93f-3970d1693435,},Annotations:map[string]string{io.kubernetes.container.hash: 15a9a52d,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c27c4962bb8982cea91927cd93e77d3643624f2c8bdcea1c57eb199d8f543711,PodSandboxId:97cb74082cec07a03da7f17124d58169d0d5ce6aae5edd2354bac9172ac01fab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983345182653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f8pbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201e687b-1c1b-4030-8b59-b0257a0f876c,},Annotations:map[string]string{io.kubernetes.container.hash: b7f28944,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84874b63b7fd20e3d3decf270af77e030db0b85759bd926f234f6279866448ef,PodSandboxId:5241ddb542f5c62f506f5b18a1941134c2de81ac18f5ae8821b307f4f5883d95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717422983362215651,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jgk4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75
956644-426d-49a7-b80c-492c4284f438,},Annotations:map[string]string{io.kubernetes.container.hash: 351ca9b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:526214f62ac9877b8679bd74972c9e5a7fff1255392e765ade04863755dcd249,PodSandboxId:4abcb5707b6281b704951dd006b0fc5235c4a67bdea8cb5d9884e7caf44fe1a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1717422982692339386,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t45fn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0578c151-2b36-4125-83f8-f4fbd62a1dc4,},Annotations:map[string]string{io.kubernetes.container.hash: 112a4db8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3712516987b541315caa42ec191de1f8490381a8905601bc18670783e5b465a5,PodSandboxId:70a59eaf41f06bbd52f930cfcd9094a16ee18caa9e889af1077f5514e16c8b59,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717422962836173073,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb44b3b772d07e11b206e5b0f01ae231,},Annotations:map[string]string{io.kubernetes.container.hash: 866818ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ce44663e901258ef10d9f88a3641193e8adb038271772c2d6c42f3265d96a2,PodSandboxId:7706a8490750828708105889445e13e10bec6f79ba757e00264a76ca5a45746d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717422962784502292,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8375333af2ee4d43244d7eb8597636ed,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2ab9144d51792c0c66d2253bdb0cd54218b45f89025eb3186e6d8a4cf17b13,PodSandboxId:32b672762a022aeb960a5dc333847256cd576a93185aa0f3fb8194f4ceec2629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717422962753743904,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bd39c3f2658eb439cf6bf643a144708940ab9ca35268e56fb38dd0f2b2d0ba,PodSandboxId:2d0df016f48d0b036573ad32fbe476386c90d4c761148c4fccbbb88836f4a372,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717422962660373747,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7e13d65dd08a92b3dadcddfd215dae3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ddf7aff7fc5db99feefd13d7ac8694eade514d615f96858e0a149d53aeda96,PodSandboxId:22af9160ec5ca93dfc01af0b91c1583b0172a9efef2b983b6648117cb92bea4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717422669211181789,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-817450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68c0344827ff03b6ad52446b7293abdc,},Annotations:map[string]string{io.kubernetes.container.hash: 64cf595c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3087a7a-9401-472c-a3ce-aeeb3806ce38 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e396c54cedb35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   3b302ef8b6487       storage-provisioner
	84874b63b7fd2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   5241ddb542f5c       coredns-7db6d8ff4d-jgk4p
	c27c4962bb898       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   97cb74082cec0       coredns-7db6d8ff4d-f8pbl
	526214f62ac98       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   14 minutes ago      Running             kube-proxy                0                   4abcb5707b628       kube-proxy-t45fn
	3712516987b54       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   70a59eaf41f06       etcd-no-preload-817450
	84ce44663e901       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   14 minutes ago      Running             kube-scheduler            2                   7706a84907508       kube-scheduler-no-preload-817450
	1a2ab9144d517       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   14 minutes ago      Running             kube-apiserver            2                   32b672762a022       kube-apiserver-no-preload-817450
	87bd39c3f2658       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   14 minutes ago      Running             kube-controller-manager   2                   2d0df016f48d0       kube-controller-manager-no-preload-817450
	08ddf7aff7fc5       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   19 minutes ago      Exited              kube-apiserver            1                   22af9160ec5ca       kube-apiserver-no-preload-817450
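	
	The crio debug entries and the container status table above are two views of the same CRI ListContainers call. As a minimal sketch only (not part of the test harness; the socket path and Go module imports below are assumptions), the same request can be issued against the cri-o socket with the Go CRI client:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed cri-o socket path, matching the kubeadm cri-socket annotation above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Empty filter: the "no filters were applied, returning full container list" case in the log.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Short ID, state, and container name, roughly the first columns of the table above.
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}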
	
	
	==> coredns [84874b63b7fd20e3d3decf270af77e030db0b85759bd926f234f6279866448ef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c27c4962bb8982cea91927cd93e77d3643624f2c8bdcea1c57eb199d8f543711] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-817450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-817450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
	                    minikube.k8s.io/name=no-preload-817450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_03T13_56_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 03 Jun 2024 13:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-817450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 03 Jun 2024 14:10:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 03 Jun 2024 14:06:40 +0000   Mon, 03 Jun 2024 13:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 03 Jun 2024 14:06:40 +0000   Mon, 03 Jun 2024 13:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 03 Jun 2024 14:06:40 +0000   Mon, 03 Jun 2024 13:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 03 Jun 2024 14:06:40 +0000   Mon, 03 Jun 2024 13:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.125
	  Hostname:    no-preload-817450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f556b2d5de8f43ba90b51bc125687665
	  System UUID:                f556b2d5-de8f-43ba-90b5-1bc125687665
	  Boot ID:                    4d33bd4d-32f2-4a4a-abf6-785601422159
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-f8pbl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-jgk4p                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-817450                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-817450             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-817450    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-t45fn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-817450             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-569cc877fc-j2lpf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-817450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-817450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-817450 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-817450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-817450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-817450 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-817450 event: Registered Node no-preload-817450 in Controller
	
	
	==> dmesg <==
	[  +0.043992] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.950479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.516387] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.683132] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.786286] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.061639] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058477] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.189366] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.113044] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.297563] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[Jun 3 13:51] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[  +0.069704] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.137649] systemd-fstab-generator[1345]: Ignoring "noauto" option for root device
	[  +4.111210] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.914404] kauditd_printk_skb: 53 callbacks suppressed
	[  +7.346448] kauditd_printk_skb: 24 callbacks suppressed
	[Jun 3 13:55] kauditd_printk_skb: 3 callbacks suppressed
	[Jun 3 13:56] systemd-fstab-generator[3968]: Ignoring "noauto" option for root device
	[  +6.567762] systemd-fstab-generator[4288]: Ignoring "noauto" option for root device
	[  +0.105887] kauditd_printk_skb: 58 callbacks suppressed
	[ +13.824813] systemd-fstab-generator[4485]: Ignoring "noauto" option for root device
	[  +0.113315] kauditd_printk_skb: 12 callbacks suppressed
	[Jun 3 13:57] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3712516987b541315caa42ec191de1f8490381a8905601bc18670783e5b465a5] <==
	{"level":"info","ts":"2024-06-03T13:56:03.204383Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.125:2380"}
	{"level":"info","ts":"2024-06-03T13:56:03.204409Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.125:2380"}
	{"level":"info","ts":"2024-06-03T13:56:03.214446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c switched to configuration voters=(3293409604803801452)"}
	{"level":"info","ts":"2024-06-03T13:56:03.214698Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1634120b80e66761","local-member-id":"2db48a961a30b16c","added-peer-id":"2db48a961a30b16c","added-peer-peer-urls":["https://192.168.72.125:2380"]}
	{"level":"info","ts":"2024-06-03T13:56:04.132995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-03T13:56:04.133122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-03T13:56:04.133164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c received MsgPreVoteResp from 2db48a961a30b16c at term 1"}
	{"level":"info","ts":"2024-06-03T13:56:04.133193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c became candidate at term 2"}
	{"level":"info","ts":"2024-06-03T13:56:04.133217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c received MsgVoteResp from 2db48a961a30b16c at term 2"}
	{"level":"info","ts":"2024-06-03T13:56:04.133244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2db48a961a30b16c became leader at term 2"}
	{"level":"info","ts":"2024-06-03T13:56:04.133272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2db48a961a30b16c elected leader 2db48a961a30b16c at term 2"}
	{"level":"info","ts":"2024-06-03T13:56:04.138206Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2db48a961a30b16c","local-member-attributes":"{Name:no-preload-817450 ClientURLs:[https://192.168.72.125:2379]}","request-path":"/0/members/2db48a961a30b16c/attributes","cluster-id":"1634120b80e66761","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-03T13:56:04.139973Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:56:04.140164Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:56:04.144388Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-03T13:56:04.153727Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-03T13:56:04.153765Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-03T13:56:04.156098Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1634120b80e66761","local-member-id":"2db48a961a30b16c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:56:04.156423Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:56:04.159949Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-03T13:56:04.163961Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-03T13:56:04.168137Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.125:2379"}
	{"level":"info","ts":"2024-06-03T14:06:04.220909Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":718}
	{"level":"info","ts":"2024-06-03T14:06:04.2302Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":718,"took":"8.868201ms","hash":4131856081,"current-db-size-bytes":2183168,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2183168,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-03T14:06:04.230259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4131856081,"revision":718,"compact-revision":-1}
	
	
	==> kernel <==
	 14:10:51 up 20 min,  0 users,  load average: 0.09, 0.19, 0.13
	Linux no-preload-817450 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [08ddf7aff7fc5db99feefd13d7ac8694eade514d615f96858e0a149d53aeda96] <==
	W0603 13:55:55.815060       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.826085       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.849857       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.858186       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.862070       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:55.925954       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.125628       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.208409       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.220387       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.255041       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.271410       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.276365       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.282499       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.297186       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.393385       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.782558       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.804714       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.904627       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.944660       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.946990       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:56.951988       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:57.066575       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:57.080185       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:57.257218       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0603 13:55:57.370225       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [1a2ab9144d51792c0c66d2253bdb0cd54218b45f89025eb3186e6d8a4cf17b13] <==
	I0603 14:04:06.703221       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:06:05.706785       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:06:05.707276       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0603 14:06:06.708168       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:06:06.708229       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:06:06.708239       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:06:06.708291       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:06:06.708359       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:06:06.709496       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:07:06.709235       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:07:06.709472       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:07:06.709510       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:07:06.709607       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:07:06.709741       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:07:06.711404       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:09:06.710208       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:09:06.710561       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0603 14:09:06.710595       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0603 14:09:06.712413       1 handler_proxy.go:93] no RequestInfo found in the context
	E0603 14:09:06.712519       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0603 14:09:06.712527       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
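	
	The repeated 503 responses for v1beta1.metrics.k8s.io above mean the aggregated metrics-server API never became reachable, which lines up with the metrics-server related failures in this run. A hedged sketch (not part of the harness; the kubeconfig path below is an assumption) of dumping that APIService's registration conditions with the kube-aggregator clientset:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
	)
	
	func main() {
		// Assumed kubeconfig location on the node; adjust for the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := aggregator.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Fetch the aggregated API registration the apiserver keeps failing to reach.
		svc, err := client.ApiregistrationV1().APIServices().Get(
			context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range svc.Status.Conditions {
			// An Available=False condition here corresponds to the 503s logged above.
			fmt.Printf("%s=%s reason=%s message=%s\n",
				cond.Type, cond.Status, cond.Reason, cond.Message)
		}
	}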
	
	
	==> kube-controller-manager [87bd39c3f2658eb439cf6bf643a144708940ab9ca35268e56fb38dd0f2b2d0ba] <==
	I0603 14:04:52.496405       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:05:21.853073       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:05:22.505124       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:05:51.858296       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:05:52.518242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:06:21.864198       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:06:22.526761       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:06:51.869328       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:06:52.535847       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:07:21.876710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:07:22.543928       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0603 14:07:32.346147       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="167.352µs"
	I0603 14:07:46.342966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="146.935µs"
	E0603 14:07:51.882489       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:07:52.560053       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:08:21.887820       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:08:22.568695       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:08:51.892814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:08:52.578667       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:09:21.898184       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:09:22.586683       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:09:51.903564       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:09:52.602050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0603 14:10:21.910721       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0603 14:10:22.610852       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [526214f62ac9877b8679bd74972c9e5a7fff1255392e765ade04863755dcd249] <==
	I0603 13:56:23.024212       1 server_linux.go:69] "Using iptables proxy"
	I0603 13:56:23.040278       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.125"]
	I0603 13:56:23.274188       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0603 13:56:23.274277       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0603 13:56:23.274293       1 server_linux.go:165] "Using iptables Proxier"
	I0603 13:56:23.279805       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0603 13:56:23.280058       1 server.go:872] "Version info" version="v1.30.1"
	I0603 13:56:23.280076       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0603 13:56:23.282741       1 config.go:192] "Starting service config controller"
	I0603 13:56:23.282773       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0603 13:56:23.282794       1 config.go:101] "Starting endpoint slice config controller"
	I0603 13:56:23.282797       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0603 13:56:23.283308       1 config.go:319] "Starting node config controller"
	I0603 13:56:23.283314       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0603 13:56:23.384416       1 shared_informer.go:320] Caches are synced for service config
	I0603 13:56:23.384458       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0603 13:56:23.384472       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [84ce44663e901258ef10d9f88a3641193e8adb038271772c2d6c42f3265d96a2] <==
	W0603 13:56:05.749841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 13:56:05.751372       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 13:56:05.749051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 13:56:05.751383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 13:56:06.606062       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0603 13:56:06.606112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0603 13:56:06.706050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0603 13:56:06.706097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0603 13:56:06.757931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0603 13:56:06.758023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0603 13:56:06.776279       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0603 13:56:06.776373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0603 13:56:06.848588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0603 13:56:06.848715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0603 13:56:06.860770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0603 13:56:06.860847       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0603 13:56:06.875185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0603 13:56:06.875238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0603 13:56:06.915410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0603 13:56:06.915588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0603 13:56:06.918373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0603 13:56:06.918480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0603 13:56:06.991037       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0603 13:56:06.992004       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0603 13:56:09.638054       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 03 14:08:08 no-preload-817450 kubelet[4295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:08:08 no-preload-817450 kubelet[4295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:08:11 no-preload-817450 kubelet[4295]: E0603 14:08:11.325089    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:08:24 no-preload-817450 kubelet[4295]: E0603 14:08:24.325454    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:08:35 no-preload-817450 kubelet[4295]: E0603 14:08:35.325032    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:08:49 no-preload-817450 kubelet[4295]: E0603 14:08:49.326520    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:09:02 no-preload-817450 kubelet[4295]: E0603 14:09:02.324813    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:09:08 no-preload-817450 kubelet[4295]: E0603 14:09:08.360763    4295 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:09:08 no-preload-817450 kubelet[4295]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:09:08 no-preload-817450 kubelet[4295]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:09:08 no-preload-817450 kubelet[4295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:09:08 no-preload-817450 kubelet[4295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:09:16 no-preload-817450 kubelet[4295]: E0603 14:09:16.326463    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:09:31 no-preload-817450 kubelet[4295]: E0603 14:09:31.323932    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:09:45 no-preload-817450 kubelet[4295]: E0603 14:09:45.324856    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:09:59 no-preload-817450 kubelet[4295]: E0603 14:09:59.325445    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:10:08 no-preload-817450 kubelet[4295]: E0603 14:10:08.368039    4295 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 03 14:10:08 no-preload-817450 kubelet[4295]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 03 14:10:08 no-preload-817450 kubelet[4295]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 03 14:10:08 no-preload-817450 kubelet[4295]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 03 14:10:08 no-preload-817450 kubelet[4295]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 03 14:10:11 no-preload-817450 kubelet[4295]: E0603 14:10:11.324862    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:10:26 no-preload-817450 kubelet[4295]: E0603 14:10:26.325212    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:10:39 no-preload-817450 kubelet[4295]: E0603 14:10:39.325141    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	Jun 03 14:10:51 no-preload-817450 kubelet[4295]: E0603 14:10:51.324685    4295 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2lpf" podUID="4f776017-1575-4461-a7c8-656e5a170460"
	
	
	==> storage-provisioner [e396c54cedb357a57ac99faf2643088bd5f4ac32e3d087d6b89793d9ec4eeb08] <==
	I0603 13:56:24.054201       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0603 13:56:24.069760       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0603 13:56:24.069936       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0603 13:56:24.085538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0603 13:56:24.085705       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-817450_f3e3ca56-c2e6-4354-9398-171f9ff71371!
	I0603 13:56:24.086815       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0c63b3e1-dae4-4baa-9434-99efb0ec2ea8", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-817450_f3e3ca56-c2e6-4354-9398-171f9ff71371 became leader
	I0603 13:56:24.186237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-817450_f3e3ca56-c2e6-4354-9398-171f9ff71371!
	

                                                
                                                
-- /stdout --
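Reading the dump above: the kube-scheduler "forbidden" warnings at 13:56:05-06 look like the usual transient startup noise while the scheduler's informers come up before its RBAC is served; no further warnings appear after the "Caches are synced" line at 13:56:09. The kubelet entries that repeat every ~13s show metrics-server stuck in ImagePullBackOff because its image points at an unresolvable registry (fake.domain/registry.k8s.io/echoserver:1.4). A minimal client-go sketch (not part of the test suite; it assumes a reachable kubeconfig and that the addon pods carry the usual k8s-app=metrics-server label) that surfaces the same waiting reason:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the CI run would use the minikube profile's context instead.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Assumed label selector for the metrics-server addon pods.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				if st.State.Waiting != nil {
					// For a run like the one above this prints ImagePullBackOff
					// alongside the fake.domain image reference.
					fmt.Printf("%s/%s: %s (%s)\n", p.Name, st.Name, st.State.Waiting.Reason, st.Image)
				}
			}
		}
	}

For this run the output would be expected to show ImagePullBackOff with the fake.domain image, matching the pod_workers errors in the kubelet log.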
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-817450 -n no-preload-817450
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-817450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-j2lpf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-817450 describe pod metrics-server-569cc877fc-j2lpf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-817450 describe pod metrics-server-569cc877fc-j2lpf: exit status 1 (63.427107ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-j2lpf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-817450 describe pod metrics-server-569cc877fc-j2lpf: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (321.14s)
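The AddonExistsAfterStop failures here and in the old-k8s-version section below come down to the same wait loop: helpers_test.go repeatedly lists pods matching a label selector until a deadline and logs a WARNING on every failed attempt, which is why the next section is dominated by "connection refused" lines while its apiserver at 192.168.50.65:8443 is not reachable. A rough client-go sketch of that pattern (an illustration only, not the actual helper; the kubeconfig path and retry interval are placeholders):

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig; the integration tests use the profile's context instead.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(9 * time.Minute) // the test below waits 9m0s
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				// This is the condition behind each WARNING line: the list call fails
				// while the apiserver refuses connections, so the loop just retries.
				fmt.Println("WARNING: pod list failed:", err)
				time.Sleep(5 * time.Second)
				continue
			}
			if len(pods.Items) > 0 {
				fmt.Println("found", len(pods.Items), "dashboard pod(s)")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard pods")
	}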

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (139.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:08:01.280239 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:08:22.013618 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:09:13.231669 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:09:30.123564 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.65:8443: connect: connection refused
E0603 14:09:58.228165 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 2 (241.429737ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-151788" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-151788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-151788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.731µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-151788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
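For reference, the failing check above amounts to two things: no pod matching k8s-app=kubernetes-dashboard became ready within the 9m0s deadline, and the dashboard-metrics-scraper deployment could not be described because the apiserver at 192.168.50.65:8443 refused connections. The snippet below is a minimal, hypothetical Go helper (not part of the minikube test suite; standard library only, shelling out to kubectl) that re-runs the same describe-and-scan image check against the old-k8s-version-151788 context once the apiserver is reachable again.

	// dashboard_image_check.go — hypothetical helper, not part of the test suite.
	// It roughly mirrors the assertion that fails above: describe the
	// dashboard-metrics-scraper deployment and verify the custom image
	// registry.k8s.io/echoserver:1.4 appears in the output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "old-k8s-version-151788",
			"-n", "kubernetes-dashboard",
			"describe", "deploy/dashboard-metrics-scraper",
		).CombinedOutput()
		if err != nil {
			// With the apiserver stopped this fails much like the test does.
			fmt.Printf("describe failed: %v\n%s", err, out)
			return
		}
		if strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
			fmt.Println("deployment uses the expected image registry.k8s.io/echoserver:1.4")
		} else {
			fmt.Println("expected image registry.k8s.io/echoserver:1.4 not found in deployment")
		}
	}

While the profile still reports the apiserver as "Stopped" (as in the status output below), this helper fails the same way the test did, with connection refused rather than an image mismatch.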
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 2 (230.4671ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-151788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-151788 logs -n 25: (1.765896067s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-021279 sudo cat                              | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo                                  | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo find                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-021279 sudo crio                             | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-021279                                       | bridge-021279                | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-069000 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:41 UTC |
	|         | disable-driver-mounts-069000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:41 UTC | 03 Jun 24 13:43 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-817450             | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC | 03 Jun 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-223260            | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-030870  | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC | 03 Jun 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-817450                  | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-151788        | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-817450                                   | no-preload-817450            | jenkins | v1.33.1 | 03 Jun 24 13:44 UTC | 03 Jun 24 13:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-223260                 | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-223260                                  | embed-certs-223260           | jenkins | v1.33.1 | 03 Jun 24 13:45 UTC | 03 Jun 24 13:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-030870       | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-030870 | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:54 UTC |
	|         | default-k8s-diff-port-030870                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-151788             | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC | 03 Jun 24 13:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-151788                              | old-k8s-version-151788       | jenkins | v1.33.1 | 03 Jun 24 13:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 13:46:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 13:46:22.347386 1143678 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:46:22.347655 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347666 1143678 out.go:304] Setting ErrFile to fd 2...
	I0603 13:46:22.347672 1143678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:46:22.347855 1143678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:46:22.348458 1143678 out.go:298] Setting JSON to false
	I0603 13:46:22.349502 1143678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16129,"bootTime":1717406253,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:46:22.349571 1143678 start.go:139] virtualization: kvm guest
	I0603 13:46:22.351720 1143678 out.go:177] * [old-k8s-version-151788] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:46:22.353180 1143678 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:46:22.353235 1143678 notify.go:220] Checking for updates...
	I0603 13:46:22.354400 1143678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:46:22.355680 1143678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:46:22.356796 1143678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:46:22.357952 1143678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:46:22.359052 1143678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:46:22.360807 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:46:22.361230 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.361306 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.376241 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0603 13:46:22.376679 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.377267 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.377292 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.377663 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.377897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.379705 1143678 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0603 13:46:22.380895 1143678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:46:22.381188 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:46:22.381222 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:46:22.396163 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0603 13:46:22.396669 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:46:22.397158 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:46:22.397180 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:46:22.397509 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:46:22.397693 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:46:22.433731 1143678 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 13:46:22.434876 1143678 start.go:297] selected driver: kvm2
	I0603 13:46:22.434897 1143678 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.435028 1143678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:46:22.435716 1143678 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.435807 1143678 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 13:46:22.451200 1143678 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 13:46:22.451663 1143678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:46:22.451755 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:46:22.451773 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:46:22.451832 1143678 start.go:340] cluster config:
	{Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:46:22.451961 1143678 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 13:46:22.454327 1143678 out.go:177] * Starting "old-k8s-version-151788" primary control-plane node in "old-k8s-version-151788" cluster
	I0603 13:46:22.057705 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:22.455453 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:46:22.455492 1143678 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 13:46:22.455501 1143678 cache.go:56] Caching tarball of preloaded images
	I0603 13:46:22.455591 1143678 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0603 13:46:22.455604 1143678 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 13:46:22.455685 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:46:22.455860 1143678 start.go:360] acquireMachinesLock for old-k8s-version-151788: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:46:28.137725 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:31.209684 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:37.289692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:40.361614 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:46.441692 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:49.513686 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:55.593727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:46:58.665749 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:04.745752 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:07.817726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:13.897702 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:16.969727 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:23.049716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:26.121758 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:32.201765 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:35.273759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:41.353716 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:44.425767 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:50.505743 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:53.577777 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:47:59.657729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:02.729769 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:08.809709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:11.881708 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:17.961759 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:21.033726 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:27.113698 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:30.185691 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:36.265722 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:39.337764 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:45.417711 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:48.489729 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:54.569746 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:48:57.641701 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:03.721772 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:06.793709 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:12.873710 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:15.945728 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:22.025678 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:25.097675 1142862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.125:22: connect: no route to host
	I0603 13:49:28.102218 1143252 start.go:364] duration metric: took 3m44.709006863s to acquireMachinesLock for "embed-certs-223260"
	I0603 13:49:28.102293 1143252 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:28.102302 1143252 fix.go:54] fixHost starting: 
	I0603 13:49:28.102635 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:28.102666 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:28.118384 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0603 13:49:28.119014 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:28.119601 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:49:28.119630 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:28.119930 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:28.120116 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:28.120302 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:49:28.122003 1143252 fix.go:112] recreateIfNeeded on embed-certs-223260: state=Stopped err=<nil>
	I0603 13:49:28.122030 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	W0603 13:49:28.122167 1143252 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:28.123963 1143252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-223260" ...
	I0603 13:49:28.125564 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Start
	I0603 13:49:28.125750 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring networks are active...
	I0603 13:49:28.126598 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network default is active
	I0603 13:49:28.126965 1143252 main.go:141] libmachine: (embed-certs-223260) Ensuring network mk-embed-certs-223260 is active
	I0603 13:49:28.127319 1143252 main.go:141] libmachine: (embed-certs-223260) Getting domain xml...
	I0603 13:49:28.128017 1143252 main.go:141] libmachine: (embed-certs-223260) Creating domain...
	I0603 13:49:28.099474 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:49:28.099536 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.099883 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:49:28.099915 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:49:28.100115 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:49:28.102052 1142862 machine.go:97] duration metric: took 4m37.409499751s to provisionDockerMachine
	I0603 13:49:28.102123 1142862 fix.go:56] duration metric: took 4m37.432963538s for fixHost
	I0603 13:49:28.102135 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 4m37.432994587s
	W0603 13:49:28.102158 1142862 start.go:713] error starting host: provision: host is not running
	W0603 13:49:28.102317 1142862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0603 13:49:28.102332 1142862 start.go:728] Will try again in 5 seconds ...
	I0603 13:49:29.332986 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting to get IP...
	I0603 13:49:29.333963 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.334430 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.334475 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.334403 1144333 retry.go:31] will retry after 203.681987ms: waiting for machine to come up
	I0603 13:49:29.539995 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.540496 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.540564 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.540457 1144333 retry.go:31] will retry after 368.548292ms: waiting for machine to come up
	I0603 13:49:29.911212 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:29.911632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:29.911665 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:29.911566 1144333 retry.go:31] will retry after 402.690969ms: waiting for machine to come up
	I0603 13:49:30.316480 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.316889 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.316920 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.316852 1144333 retry.go:31] will retry after 500.397867ms: waiting for machine to come up
	I0603 13:49:30.818653 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:30.819082 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:30.819107 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:30.819026 1144333 retry.go:31] will retry after 663.669804ms: waiting for machine to come up
	I0603 13:49:31.483776 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:31.484117 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:31.484144 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:31.484079 1144333 retry.go:31] will retry after 938.433137ms: waiting for machine to come up
	I0603 13:49:32.424128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:32.424609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:32.424640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:32.424548 1144333 retry.go:31] will retry after 919.793328ms: waiting for machine to come up
	I0603 13:49:33.103895 1142862 start.go:360] acquireMachinesLock for no-preload-817450: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0603 13:49:33.346091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:33.346549 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:33.346574 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:33.346511 1144333 retry.go:31] will retry after 1.115349726s: waiting for machine to come up
	I0603 13:49:34.463875 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:34.464588 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:34.464616 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:34.464529 1144333 retry.go:31] will retry after 1.153940362s: waiting for machine to come up
	I0603 13:49:35.619844 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:35.620243 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:35.620275 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:35.620176 1144333 retry.go:31] will retry after 1.514504154s: waiting for machine to come up
	I0603 13:49:37.135961 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:37.136409 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:37.136431 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:37.136382 1144333 retry.go:31] will retry after 2.757306897s: waiting for machine to come up
	I0603 13:49:39.895589 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:39.895942 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:39.895970 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:39.895881 1144333 retry.go:31] will retry after 3.019503072s: waiting for machine to come up
	I0603 13:49:42.919177 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:42.919640 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | unable to find current IP address of domain embed-certs-223260 in network mk-embed-certs-223260
	I0603 13:49:42.919670 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | I0603 13:49:42.919588 1144333 retry.go:31] will retry after 3.150730989s: waiting for machine to come up
	I0603 13:49:47.494462 1143450 start.go:364] duration metric: took 3m37.207410663s to acquireMachinesLock for "default-k8s-diff-port-030870"
	I0603 13:49:47.494544 1143450 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:49:47.494557 1143450 fix.go:54] fixHost starting: 
	I0603 13:49:47.494876 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:49:47.494918 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:49:47.511570 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44939
	I0603 13:49:47.512072 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:49:47.512568 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:49:47.512593 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:49:47.512923 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:49:47.513117 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:49:47.513276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:49:47.514783 1143450 fix.go:112] recreateIfNeeded on default-k8s-diff-port-030870: state=Stopped err=<nil>
	I0603 13:49:47.514817 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	W0603 13:49:47.514999 1143450 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:49:47.517441 1143450 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-030870" ...
	I0603 13:49:46.071609 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072094 1143252 main.go:141] libmachine: (embed-certs-223260) Found IP for machine: 192.168.83.246
	I0603 13:49:46.072117 1143252 main.go:141] libmachine: (embed-certs-223260) Reserving static IP address...
	I0603 13:49:46.072132 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has current primary IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.072552 1143252 main.go:141] libmachine: (embed-certs-223260) Reserved static IP address: 192.168.83.246
	I0603 13:49:46.072585 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.072593 1143252 main.go:141] libmachine: (embed-certs-223260) Waiting for SSH to be available...
	I0603 13:49:46.072632 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | skip adding static IP to network mk-embed-certs-223260 - found existing host DHCP lease matching {name: "embed-certs-223260", mac: "52:54:00:8e:14:a8", ip: "192.168.83.246"}
	I0603 13:49:46.072655 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Getting to WaitForSSH function...
	I0603 13:49:46.074738 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075059 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.075091 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.075189 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH client type: external
	I0603 13:49:46.075213 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa (-rw-------)
	I0603 13:49:46.075249 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:49:46.075271 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | About to run SSH command:
	I0603 13:49:46.075283 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | exit 0
	I0603 13:49:46.197971 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | SSH cmd err, output: <nil>: 
	I0603 13:49:46.198498 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetConfigRaw
	I0603 13:49:46.199179 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.201821 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.202277 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.202533 1143252 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/config.json ...
	I0603 13:49:46.202727 1143252 machine.go:94] provisionDockerMachine start ...
	I0603 13:49:46.202745 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:46.202964 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.205259 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205636 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.205663 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.205773 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.205954 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206100 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.206318 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.206538 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.206819 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.206837 1143252 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:49:46.310241 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:49:46.310277 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310583 1143252 buildroot.go:166] provisioning hostname "embed-certs-223260"
	I0603 13:49:46.310616 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.310836 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.313692 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314078 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.314116 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.314222 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.314446 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314631 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.314800 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.314969 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.315166 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.315183 1143252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-223260 && echo "embed-certs-223260" | sudo tee /etc/hostname
	I0603 13:49:46.428560 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-223260
	
	I0603 13:49:46.428600 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.431381 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.431757 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.431784 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.432021 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.432283 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432477 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.432609 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.432785 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.432960 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.432976 1143252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-223260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-223260/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-223260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:49:46.542400 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
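(Editorial note, not part of the log: the shell snippet above adds or rewrites the 127.0.1.1 entry only when the hostname is missing. The same idempotent edit can be expressed as a small pure function; this is an illustrative sketch, not minikube's implementation.)

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry returns the hosts file contents with a 127.0.1.1 line for
// name, rewriting an existing 127.0.1.1 line or appending a new one.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already present on some line
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, entry)
	}
	return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "embed-certs-223260"))
}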
	I0603 13:49:46.542446 1143252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:49:46.542536 1143252 buildroot.go:174] setting up certificates
	I0603 13:49:46.542557 1143252 provision.go:84] configureAuth start
	I0603 13:49:46.542576 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetMachineName
	I0603 13:49:46.542913 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:46.545940 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546339 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.546368 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.546499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.548715 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549097 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.549127 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.549294 1143252 provision.go:143] copyHostCerts
	I0603 13:49:46.549382 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:49:46.549397 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:49:46.549486 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:49:46.549578 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:49:46.549587 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:49:46.549613 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:49:46.549664 1143252 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:49:46.549671 1143252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:49:46.549690 1143252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:49:46.549740 1143252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.embed-certs-223260 san=[127.0.0.1 192.168.83.246 embed-certs-223260 localhost minikube]
	I0603 13:49:46.807050 1143252 provision.go:177] copyRemoteCerts
	I0603 13:49:46.807111 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:49:46.807140 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.809916 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810303 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.810347 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.810513 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.810758 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.810929 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.811168 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:46.892182 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:49:46.916657 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0603 13:49:46.941896 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:49:46.967292 1143252 provision.go:87] duration metric: took 424.714334ms to configureAuth
	I0603 13:49:46.967331 1143252 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:49:46.967539 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:49:46.967626 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:46.970350 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970668 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:46.970703 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:46.970870 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:46.971115 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971314 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:46.971454 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:46.971625 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:46.971809 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:46.971831 1143252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:49:47.264894 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:49:47.264922 1143252 machine.go:97] duration metric: took 1.062182146s to provisionDockerMachine
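(Editorial note, not part of the log: the container-runtime step just above writes a sysconfig drop-in with CRIO_MINIKUBE_OPTIONS and then restarts CRI-O. A local sketch of writing such a drop-in, with the file contents copied from the log and the restart left as a comment, could look like this.)

package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Contents mirror the tee command in the log above.
	contents := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	dir := "/etc/sysconfig"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "crio.minikube"), []byte(contents), 0o644); err != nil {
		log.Fatal(err)
	}
	// A real provisioner would now run `systemctl restart crio`.
}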
	I0603 13:49:47.264935 1143252 start.go:293] postStartSetup for "embed-certs-223260" (driver="kvm2")
	I0603 13:49:47.264946 1143252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:49:47.264963 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.265368 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:49:47.265398 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.268412 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268765 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.268796 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.268989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.269223 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.269455 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.269625 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.348583 1143252 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:49:47.352828 1143252 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:49:47.352867 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:49:47.352949 1143252 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:49:47.353046 1143252 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:49:47.353164 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:49:47.363222 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:47.388132 1143252 start.go:296] duration metric: took 123.177471ms for postStartSetup
	I0603 13:49:47.388202 1143252 fix.go:56] duration metric: took 19.285899119s for fixHost
	I0603 13:49:47.388233 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.390960 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391414 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.391477 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.391681 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.391937 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392127 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.392266 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.392436 1143252 main.go:141] libmachine: Using SSH client type: native
	I0603 13:49:47.392670 1143252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.246 22 <nil> <nil>}
	I0603 13:49:47.392687 1143252 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:49:47.494294 1143252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422587.469729448
	
	I0603 13:49:47.494320 1143252 fix.go:216] guest clock: 1717422587.469729448
	I0603 13:49:47.494328 1143252 fix.go:229] Guest: 2024-06-03 13:49:47.469729448 +0000 UTC Remote: 2024-06-03 13:49:47.388208749 +0000 UTC m=+244.138441135 (delta=81.520699ms)
	I0603 13:49:47.494354 1143252 fix.go:200] guest clock delta is within tolerance: 81.520699ms
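(Editorial note, not part of the log: the clock check above compares the guest's `date +%s.%N` output against the host time and accepts the machine when the delta is within tolerance. A rough sketch of that comparison follows; the one-second tolerance is an assumption for illustration.)

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's "seconds.nanoseconds" timestamp and returns
// how far it is from the supplied host time.
func clockDelta(guest string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guestTime), nil
}

func main() {
	d, err := clockDelta("1717422587.469729448", time.Unix(1717422587, 388208749))
	if err != nil {
		fmt.Println(err)
		return
	}
	if d < 0 {
		d = -d
	}
	fmt.Printf("delta %v within tolerance: %v\n", d, d < time.Second)
}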
	I0603 13:49:47.494361 1143252 start.go:83] releasing machines lock for "embed-certs-223260", held for 19.392103897s
	I0603 13:49:47.494394 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.494686 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:47.497515 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.497930 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.497976 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.498110 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498672 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498859 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:49:47.498934 1143252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:49:47.498988 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.499062 1143252 ssh_runner.go:195] Run: cat /version.json
	I0603 13:49:47.499082 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:49:47.501788 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502075 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502131 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502156 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502291 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502390 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:47.502427 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:47.502550 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502647 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:49:47.502738 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502806 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:49:47.502942 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:49:47.502955 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.503078 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:49:47.612706 1143252 ssh_runner.go:195] Run: systemctl --version
	I0603 13:49:47.618922 1143252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:49:47.764749 1143252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:49:47.770936 1143252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:49:47.771023 1143252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:49:47.788401 1143252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
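(Editorial note, not part of the log: the find/mv above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so the bridge CNI chosen by minikube wins. An equivalent sketch using filepath.Glob, with the directory and patterns copied from the log and the rest illustrative:)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir so the
// container runtime ignores them, as the find/mv in the log does.
func disableConflictingCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return disabled, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableConflictingCNI("/etc/cni/net.d")
	fmt.Println("disabled:", files, "err:", err)
}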
	I0603 13:49:47.788427 1143252 start.go:494] detecting cgroup driver to use...
	I0603 13:49:47.788486 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:49:47.805000 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:49:47.822258 1143252 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:49:47.822315 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:49:47.837826 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:49:47.853818 1143252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:49:47.978204 1143252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:49:48.106302 1143252 docker.go:233] disabling docker service ...
	I0603 13:49:48.106366 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:49:48.120974 1143252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:49:48.134911 1143252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:49:48.278103 1143252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:49:48.398238 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:49:48.413207 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:49:48.432211 1143252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:49:48.432281 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.443668 1143252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:49:48.443746 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.454990 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.467119 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.479875 1143252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:49:48.496767 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.508872 1143252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:49:48.530972 1143252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
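(Editorial note, not part of the log: the series of sed edits above points CRI-O at the desired pause image and cgroup driver in its drop-in config. The same rewrites can be done in Go with line-oriented regexp replacement; this is only a sketch of the idea, with the config path and values taken from the log.)

package main

import (
	"log"
	"os"
	"regexp"
)

// rewriteCrioConf sets pause_image and cgroup_manager in a CRI-O drop-in
// file, mirroring the sed edits in the log above.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}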
	I0603 13:49:48.542631 1143252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:49:48.552775 1143252 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:49:48.552836 1143252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:49:48.566528 1143252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:49:48.582917 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:48.716014 1143252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:49:48.860157 1143252 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:49:48.860283 1143252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:49:48.865046 1143252 start.go:562] Will wait 60s for crictl version
	I0603 13:49:48.865121 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:49:48.869520 1143252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:49:48.909721 1143252 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
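(Editorial note, not part of the log: after restarting CRI-O, the log waits up to 60s for the runtime socket to appear before querying crictl for the version. A bare-bones poll of that kind might look like the following sketch; the socket path is from the log, the polling interval is an assumption.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s did not appear within %s", path, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}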
	I0603 13:49:48.909819 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.939080 1143252 ssh_runner.go:195] Run: crio --version
	I0603 13:49:48.970595 1143252 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:49:47.518807 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Start
	I0603 13:49:47.518981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring networks are active...
	I0603 13:49:47.519623 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network default is active
	I0603 13:49:47.519926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Ensuring network mk-default-k8s-diff-port-030870 is active
	I0603 13:49:47.520408 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Getting domain xml...
	I0603 13:49:47.521014 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Creating domain...
	I0603 13:49:48.798483 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting to get IP...
	I0603 13:49:48.799695 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800174 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:48.800305 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:48.800165 1144471 retry.go:31] will retry after 204.161843ms: waiting for machine to come up
	I0603 13:49:49.005669 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006143 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.006180 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.006091 1144471 retry.go:31] will retry after 382.751679ms: waiting for machine to come up
	I0603 13:49:49.391162 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391717 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.391750 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.391670 1144471 retry.go:31] will retry after 314.248576ms: waiting for machine to come up
	I0603 13:49:49.707349 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707957 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:49.707990 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:49.707856 1144471 retry.go:31] will retry after 446.461931ms: waiting for machine to come up
	I0603 13:49:50.155616 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156238 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.156274 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.156174 1144471 retry.go:31] will retry after 712.186964ms: waiting for machine to come up
	I0603 13:49:48.971971 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetIP
	I0603 13:49:48.975079 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975439 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:49:48.975471 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:49:48.975721 1143252 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0603 13:49:48.980114 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:48.993380 1143252 kubeadm.go:877] updating cluster {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:49:48.993543 1143252 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:49:48.993636 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:49.032289 1143252 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:49:49.032364 1143252 ssh_runner.go:195] Run: which lz4
	I0603 13:49:49.036707 1143252 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:49:49.040973 1143252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:49:49.041000 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:49:50.554295 1143252 crio.go:462] duration metric: took 1.517623353s to copy over tarball
	I0603 13:49:50.554387 1143252 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:49:52.823733 1143252 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.269303423s)
	I0603 13:49:52.823785 1143252 crio.go:469] duration metric: took 2.269454274s to extract the tarball
	I0603 13:49:52.823799 1143252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:49:52.862060 1143252 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:49:52.906571 1143252 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:49:52.906602 1143252 cache_images.go:84] Images are preloaded, skipping loading
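(Editorial note, not part of the log: the preload path above copies an lz4-compressed image tarball to the guest and unpacks it into /var before re-checking `crictl images`. A hedged sketch of the extraction step, using the same tar flags as the command in the log but run locally for illustration:)

package main

import (
	"log"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed preload tarball into destDir,
// using the same flags as the command in the log.
func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return err // nothing to extract
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}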
	I0603 13:49:52.906618 1143252 kubeadm.go:928] updating node { 192.168.83.246 8443 v1.30.1 crio true true} ...
	I0603 13:49:52.906774 1143252 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-223260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:49:52.906866 1143252 ssh_runner.go:195] Run: crio config
	I0603 13:49:52.954082 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:49:52.954111 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:49:52.954129 1143252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:49:52.954159 1143252 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.246 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-223260 NodeName:embed-certs-223260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:49:52.954355 1143252 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-223260"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:49:52.954446 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:49:52.964488 1143252 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:49:52.964582 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:49:52.974118 1143252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0603 13:49:52.990701 1143252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:49:53.007539 1143252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
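(Editorial note, not part of the log: the kubeadm and kubelet configuration shown above is rendered on the host and then copied to the node as kubeadm.yaml.new. A simplified sketch of template-driven rendering with text/template follows; the struct and field names here are placeholders, not minikube's actual template.)

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values substituted into the config
// template; these field names are illustrative only.
type clusterParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	params := clusterParams{
		AdvertiseAddress: "192.168.83.246",
		BindPort:         8443,
		NodeName:         "embed-certs-223260",
		PodSubnet:        "10.244.0.0/16",
	}
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}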
	I0603 13:49:53.024943 1143252 ssh_runner.go:195] Run: grep 192.168.83.246	control-plane.minikube.internal$ /etc/hosts
	I0603 13:49:53.029097 1143252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:49:53.041234 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:49:53.178449 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:49:53.195718 1143252 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260 for IP: 192.168.83.246
	I0603 13:49:53.195750 1143252 certs.go:194] generating shared ca certs ...
	I0603 13:49:53.195769 1143252 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:49:53.195954 1143252 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:49:53.196021 1143252 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:49:53.196035 1143252 certs.go:256] generating profile certs ...
	I0603 13:49:53.196256 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/client.key
	I0603 13:49:53.196341 1143252 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key.90d43877
	I0603 13:49:53.196437 1143252 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key
	I0603 13:49:53.196605 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:49:53.196663 1143252 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:49:53.196678 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:49:53.196708 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:49:53.196756 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:49:53.196787 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:49:53.196838 1143252 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:49:53.197895 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:49:53.231612 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:49:53.263516 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:49:50.870317 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870816 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:50.870841 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:50.870781 1144471 retry.go:31] will retry after 855.15183ms: waiting for machine to come up
	I0603 13:49:51.727393 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727926 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:51.727960 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:51.727869 1144471 retry.go:31] will retry after 997.293541ms: waiting for machine to come up
	I0603 13:49:52.726578 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727036 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:52.727073 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:52.726953 1144471 retry.go:31] will retry after 1.4233414s: waiting for machine to come up
	I0603 13:49:54.151594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152072 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:54.152099 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:54.152021 1144471 retry.go:31] will retry after 1.348888248s: waiting for machine to come up
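(Editorial note, not part of the log: the interleaved default-k8s-diff-port-030870 lines show the driver polling for the VM's DHCP lease with growing, jittered delays. A generic retry helper of that shape could be sketched as below; the jitter and growth factor are assumptions, not the exact values libmachine uses.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// jittered, growing delay between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		time.Sleep(delay)
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 200*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("machine has no IP yet")
		}
		return nil
	})
	fmt.Println("err:", err, "tries:", tries)
}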
	I0603 13:49:53.303724 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:49:53.334700 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0603 13:49:53.371594 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:49:53.396381 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:49:53.420985 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/embed-certs-223260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:49:53.445334 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:49:53.469632 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:49:53.495720 1143252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:49:53.522416 1143252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:49:53.541593 1143252 ssh_runner.go:195] Run: openssl version
	I0603 13:49:53.547653 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:49:53.558802 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563511 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.563579 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:49:53.569691 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:49:53.582814 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:49:53.595684 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600613 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.600675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:49:53.607008 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:49:53.619919 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:49:53.632663 1143252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637604 1143252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.637675 1143252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:49:53.643844 1143252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
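(Editorial note, not part of the log: each CA certificate above is installed by hashing it with `openssl x509 -hash` and symlinking `<hash>.0` to it under /etc/ssl/certs. A sketch of that pairing, shelling out to openssl for the subject hash; the paths are placeholders.)

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates
// the <hash>.0 symlink in certsDir pointing back at the certificate.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}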
	I0603 13:49:53.655934 1143252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:49:53.660801 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:49:53.667391 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:49:53.674382 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:49:53.681121 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:49:53.687496 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:49:53.693623 1143252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
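(Editorial note, not part of the log: the run of `openssl x509 -checkend 86400` calls above verifies that each control-plane certificate stays valid for at least another day. The same check can be done directly with crypto/x509; a minimal sketch, with a hypothetical certificate path:)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}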
	I0603 13:49:53.699764 1143252 kubeadm.go:391] StartCluster: {Name:embed-certs-223260 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-223260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:49:53.699871 1143252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:49:53.699928 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.736588 1143252 cri.go:89] found id: ""
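Before deciding between a fresh init and a restart, minikube lists any existing kube-system containers through crictl; the empty `found id: ""` result here means none are running yet. A hedged sketch of issuing the same query locally (assumes crictl and password-less sudo are available on the machine):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter the log shows: all containers whose pod namespace label is kube-system.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system container(s): %v\n", len(ids), ids)
}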
	I0603 13:49:53.736662 1143252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:49:53.750620 1143252 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:49:53.750644 1143252 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:49:53.750652 1143252 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:49:53.750716 1143252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:49:53.765026 1143252 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:49:53.766297 1143252 kubeconfig.go:125] found "embed-certs-223260" server: "https://192.168.83.246:8443"
	I0603 13:49:53.768662 1143252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:49:53.779583 1143252 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.246
	I0603 13:49:53.779625 1143252 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:49:53.779639 1143252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:49:53.779695 1143252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:49:53.820312 1143252 cri.go:89] found id: ""
	I0603 13:49:53.820398 1143252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:49:53.838446 1143252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:49:53.849623 1143252 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:49:53.849643 1143252 kubeadm.go:156] found existing configuration files:
	
	I0603 13:49:53.849700 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:49:53.859379 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:49:53.859451 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:49:53.869939 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:49:53.880455 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:49:53.880527 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:49:53.890918 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.900841 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:49:53.900894 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:49:53.910968 1143252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:49:53.921064 1143252 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:49:53.921121 1143252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
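The loop above greps each kubeconfig under /etc/kubernetes for the expected control-plane URL and deletes the file whenever the check fails, so the following `kubeadm init phase kubeconfig` can regenerate it. A sketch of that check-and-remove pass, using an in-process string search instead of grep (paths and URL copied from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const wantServer = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), wantServer) {
			// Missing file or wrong server: drop it so kubeadm recreates it.
			fmt.Println("removing stale config:", path)
			_ = os.Remove(path)
		}
	}
}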
	I0603 13:49:53.931550 1143252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:49:53.942309 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.078959 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:54.842079 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.043420 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:49:55.111164 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
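Rather than a full `kubeadm init`, the restart replays the individual phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch that runs the same sequence with os/exec (binary and config paths copied from the log; assumes it runs as root inside the VM):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.1/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}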
	I0603 13:49:55.220384 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:49:55.220475 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:55.721612 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.221513 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:49:56.257801 1143252 api_server.go:72] duration metric: took 1.037411844s to wait for apiserver process to appear ...
	I0603 13:49:56.257845 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:49:56.257874 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:55.502734 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503282 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:55.503313 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:55.503226 1144471 retry.go:31] will retry after 1.733012887s: waiting for machine to come up
	I0603 13:49:57.238544 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.238975 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:57.239006 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:57.238917 1144471 retry.go:31] will retry after 2.565512625s: waiting for machine to come up
	I0603 13:49:59.806662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807077 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:49:59.807105 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:49:59.807024 1144471 retry.go:31] will retry after 2.759375951s: waiting for machine to come up
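In parallel, the default-k8s-diff-port-030870 machine is still waiting for a DHCP lease, and libmachine retries the IP lookup with a growing, jittered delay (1.7s, 2.5s, 2.7s, ...). A minimal sketch of that retry pattern with a placeholder probe function (not libmachine's actual API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries probe with a jittered, growing delay, similar to the
// "will retry after ..." lines in the log.
func waitFor(probe func() error, attempts int) error {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		if err := probe(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return errors.New("machine did not come up")
}

func main() {
	calls := 0
	_ = waitFor(func() error {
		calls++
		if calls < 3 {
			return errors.New("no IP address yet") // stand-in for the DHCP lease lookup
		}
		return nil
	}, 10)
	fmt.Println("machine is up after", calls, "probes")
}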
	I0603 13:49:59.684015 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.684058 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.684078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.757751 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:49:59.757791 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:49:59.758846 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:49:59.779923 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:49:59.779974 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.258098 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.265061 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.265089 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:00.758643 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:00.764364 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:00.764400 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.257950 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.262846 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.262875 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:01.758078 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:01.763269 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:01.763301 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.258641 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.263628 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.263658 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:02.758205 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:02.765436 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:02.765470 1143252 api_server.go:103] status: https://192.168.83.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:03.258663 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:50:03.263141 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:50:03.269787 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:03.269817 1143252 api_server.go:131] duration metric: took 7.011964721s to wait for apiserver health ...
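The 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks; minikube simply re-queries /healthz until it returns 200. A sketch of such a polling loop against the endpoint from the log (TLS verification is skipped here purely for illustration):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.83.246:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", code, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}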
	I0603 13:50:03.269827 1143252 cni.go:84] Creating CNI manager for ""
	I0603 13:50:03.269833 1143252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:03.271812 1143252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:03.273154 1143252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:03.285329 1143252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
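The bridge CNI setup copies a conflist into /etc/cni/net.d; the actual 496-byte payload is not shown in the log, so the sketch below writes an illustrative bridge/portmap conflist of the usual shape rather than minikube's real file (requires root; subnet and names are examples):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Illustrative bridge CNI config; field values are examples, not minikube's.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote", len(conflist), "bytes")
}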
	I0603 13:50:03.305480 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:03.317546 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:03.317601 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:03.317614 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:03.317627 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:03.317637 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:03.317645 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:50:03.317658 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:03.317667 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:03.317677 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:50:03.317686 1143252 system_pods.go:74] duration metric: took 12.177585ms to wait for pod list to return data ...
	I0603 13:50:03.317695 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:03.321445 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:03.321479 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:03.321493 1143252 node_conditions.go:105] duration metric: took 3.787651ms to run NodePressure ...
	I0603 13:50:03.321512 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:03.598576 1143252 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604196 1143252 kubeadm.go:733] kubelet initialised
	I0603 13:50:03.604219 1143252 kubeadm.go:734] duration metric: took 5.606021ms waiting for restarted kubelet to initialise ...
	I0603 13:50:03.604236 1143252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:03.611441 1143252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.615911 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615936 1143252 pod_ready.go:81] duration metric: took 4.468017ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.615945 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.615955 1143252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.620663 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620683 1143252 pod_ready.go:81] duration metric: took 4.71967ms for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.620691 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "etcd-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.620697 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.624894 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624917 1143252 pod_ready.go:81] duration metric: took 4.212227ms for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.624925 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.624933 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:03.708636 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708665 1143252 pod_ready.go:81] duration metric: took 83.72445ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:03.708675 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:03.708681 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.109391 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109454 1143252 pod_ready.go:81] duration metric: took 400.761651ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.109469 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-proxy-s5vdl" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.109478 1143252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.509683 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509712 1143252 pod_ready.go:81] duration metric: took 400.226435ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.509723 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.509730 1143252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:04.909629 1143252 pod_ready.go:97] node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909659 1143252 pod_ready.go:81] duration metric: took 399.917901ms for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:04.909669 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-223260" hosting pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:04.909679 1143252 pod_ready.go:38] duration metric: took 1.30543039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
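Each system-critical pod is polled for its Ready condition, but the wait is cut short (the "skipping!" errors) whenever the node itself reports Ready=False, since no pod can become Ready on a NotReady node. A client-go sketch of the per-pod check, without that node short-circuit; the kubeconfig path and pod name are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-qdjrv", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}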
	I0603 13:50:04.909697 1143252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:04.921682 1143252 ops.go:34] apiserver oom_adj: -16
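The oom_adj probe reads /proc/<apiserver pid>/oom_adj; the -16 value means the apiserver is strongly deprioritized for the OOM killer. A simplified sketch of the same read (it matches the process by exact name rather than the full command-line pattern used in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
		os.Exit(1)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}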
	I0603 13:50:04.921708 1143252 kubeadm.go:591] duration metric: took 11.171050234s to restartPrimaryControlPlane
	I0603 13:50:04.921717 1143252 kubeadm.go:393] duration metric: took 11.221962831s to StartCluster
	I0603 13:50:04.921737 1143252 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.921807 1143252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:04.923342 1143252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:04.923628 1143252 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:04.927063 1143252 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:04.923693 1143252 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:04.923865 1143252 config.go:182] Loaded profile config "embed-certs-223260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:04.928850 1143252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:04.928873 1143252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-223260"
	I0603 13:50:04.928872 1143252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-223260"
	I0603 13:50:04.928889 1143252 addons.go:69] Setting metrics-server=true in profile "embed-certs-223260"
	I0603 13:50:04.928906 1143252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-223260"
	I0603 13:50:04.928923 1143252 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-223260"
	I0603 13:50:04.928935 1143252 addons.go:234] Setting addon metrics-server=true in "embed-certs-223260"
	W0603 13:50:04.928938 1143252 addons.go:243] addon storage-provisioner should already be in state true
	W0603 13:50:04.928945 1143252 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.928980 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.929307 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929346 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929352 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929372 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.929597 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.929630 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.944948 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0603 13:50:04.945071 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0603 13:50:04.945489 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.945571 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.946137 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946166 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946299 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.946319 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.946589 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946650 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.946798 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.947022 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0603 13:50:04.947210 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.947250 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.947517 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.948043 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.948069 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.948437 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.949064 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.949107 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.950532 1143252 addons.go:234] Setting addon default-storageclass=true in "embed-certs-223260"
	W0603 13:50:04.950558 1143252 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:04.950589 1143252 host.go:66] Checking if "embed-certs-223260" exists ...
	I0603 13:50:04.950951 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.951008 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.964051 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37589
	I0603 13:50:04.964078 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0603 13:50:04.964513 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.964562 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.965062 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965088 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965128 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.965153 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.965473 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965532 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.965652 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.965740 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.967606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.967739 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.969783 1143252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:04.971193 1143252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:02.567560 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.567988 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | unable to find current IP address of domain default-k8s-diff-port-030870 in network mk-default-k8s-diff-port-030870
	I0603 13:50:02.568020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | I0603 13:50:02.567915 1144471 retry.go:31] will retry after 3.955051362s: waiting for machine to come up
	I0603 13:50:04.972568 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:04.972588 1143252 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:04.972606 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971275 1143252 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:04.972634 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:04.972658 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.971495 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0603 13:50:04.973108 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.973575 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.973599 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.973931 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.974623 1143252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:04.974658 1143252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:04.976128 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976251 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976535 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976559 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976709 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.976724 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.976768 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976915 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.976989 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977099 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.977156 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977242 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.977305 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.977500 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:04.990810 1143252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0603 13:50:04.991293 1143252 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:04.991844 1143252 main.go:141] libmachine: Using API Version  1
	I0603 13:50:04.991875 1143252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:04.992279 1143252 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:04.992499 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetState
	I0603 13:50:04.994225 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .DriverName
	I0603 13:50:04.994456 1143252 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:04.994476 1143252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:04.994490 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHHostname
	I0603 13:50:04.997771 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998210 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:14:a8", ip: ""} in network mk-embed-certs-223260: {Iface:virbr5 ExpiryTime:2024-06-03 14:49:38 +0000 UTC Type:0 Mac:52:54:00:8e:14:a8 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:embed-certs-223260 Clientid:01:52:54:00:8e:14:a8}
	I0603 13:50:04.998239 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | domain embed-certs-223260 has defined IP address 192.168.83.246 and MAC address 52:54:00:8e:14:a8 in network mk-embed-certs-223260
	I0603 13:50:04.998418 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHPort
	I0603 13:50:04.998627 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHKeyPath
	I0603 13:50:04.998811 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .GetSSHUsername
	I0603 13:50:04.998941 1143252 sshutil.go:53] new ssh client: &{IP:192.168.83.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa Username:docker}
	I0603 13:50:05.119962 1143252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:05.140880 1143252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:05.271863 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:05.275815 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:05.275843 1143252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:05.294572 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:05.346520 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:05.346553 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:05.417100 1143252 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:05.417141 1143252 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:05.496250 1143252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
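	[editor's note] The lines above show the metrics-server manifests being scp'd to the node and then applied with the node's bundled kubectl against the static kubeconfig. Below is a minimal sketch of that pattern, assuming a plain `ssh` invocation in place of minikube's internal ssh_runner; the key path, user, IP and kubectl path are taken from the log, everything else is illustrative.

// Sketch: apply addon manifests on the node the way the log line above does,
// i.e. "sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <files>".
// The ssh invocation is an illustrative stand-in for minikube's ssh_runner.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func applyAddonManifests(sshKey, user, ip string, manifests []string) error {
	remote := fmt.Sprintf(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f %s",
		strings.Join(manifests, " -f "),
	)
	cmd := exec.Command("ssh",
		"-i", sshKey,
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		fmt.Sprintf("%s@%s", user, ip),
		remote,
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	key := "/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/embed-certs-223260/id_rsa"
	if err := applyAddonManifests(key, "docker", "192.168.83.246", manifests); err != nil {
		log.Fatal(err)
	}
}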
	I0603 13:50:06.207746 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207781 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.207849 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.207873 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208103 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208152 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208161 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208182 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208157 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208197 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208200 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208216 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208208 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.208284 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.208572 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208590 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.208691 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.208703 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.208724 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.216764 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.216783 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.217095 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.217111 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374254 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374281 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374603 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374623 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374634 1143252 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:06.374638 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.374644 1143252 main.go:141] libmachine: (embed-certs-223260) Calling .Close
	I0603 13:50:06.374901 1143252 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:06.374916 1143252 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:06.374933 1143252 addons.go:475] Verifying addon metrics-server=true in "embed-certs-223260"
	I0603 13:50:06.374948 1143252 main.go:141] libmachine: (embed-certs-223260) DBG | Closing plugin on server side
	I0603 13:50:06.377491 1143252 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:50:08.083130 1143678 start.go:364] duration metric: took 3m45.627229097s to acquireMachinesLock for "old-k8s-version-151788"
	I0603 13:50:08.083256 1143678 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:08.083266 1143678 fix.go:54] fixHost starting: 
	I0603 13:50:08.083762 1143678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:08.083812 1143678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:08.103187 1143678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0603 13:50:08.103693 1143678 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:08.104269 1143678 main.go:141] libmachine: Using API Version  1
	I0603 13:50:08.104299 1143678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:08.104746 1143678 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:08.105115 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:08.105347 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetState
	I0603 13:50:08.107125 1143678 fix.go:112] recreateIfNeeded on old-k8s-version-151788: state=Stopped err=<nil>
	I0603 13:50:08.107173 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	W0603 13:50:08.107340 1143678 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:08.109207 1143678 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-151788" ...
	I0603 13:50:06.378684 1143252 addons.go:510] duration metric: took 1.4549999s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 13:50:07.145643 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:06.526793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527302 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Found IP for machine: 192.168.39.177
	I0603 13:50:06.527341 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has current primary IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.527366 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserving static IP address...
	I0603 13:50:06.527822 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Reserved static IP address: 192.168.39.177
	I0603 13:50:06.527857 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Waiting for SSH to be available...
	I0603 13:50:06.527902 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.527956 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | skip adding static IP to network mk-default-k8s-diff-port-030870 - found existing host DHCP lease matching {name: "default-k8s-diff-port-030870", mac: "52:54:00:62:09:d4", ip: "192.168.39.177"}
	I0603 13:50:06.527973 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Getting to WaitForSSH function...
	I0603 13:50:06.530287 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530662 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.530696 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.530802 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH client type: external
	I0603 13:50:06.530827 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa (-rw-------)
	I0603 13:50:06.530849 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:06.530866 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | About to run SSH command:
	I0603 13:50:06.530877 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | exit 0
	I0603 13:50:06.653910 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:06.654259 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetConfigRaw
	I0603 13:50:06.654981 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:06.658094 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658561 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.658600 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.658921 1143450 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/config.json ...
	I0603 13:50:06.659144 1143450 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:06.659168 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:06.659486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.662534 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.662915 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.662959 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.663059 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.663258 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663476 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.663660 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.663866 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.664103 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.664115 1143450 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:06.766054 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:06.766083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766406 1143450 buildroot.go:166] provisioning hostname "default-k8s-diff-port-030870"
	I0603 13:50:06.766440 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:06.766708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.769445 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.769820 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.769871 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.770029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.770244 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770423 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.770670 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.770893 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.771057 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.771070 1143450 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-030870 && echo "default-k8s-diff-port-030870" | sudo tee /etc/hostname
	I0603 13:50:06.889997 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-030870
	
	I0603 13:50:06.890029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:06.893778 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894260 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:06.894297 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:06.894614 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:06.894826 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895029 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:06.895211 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:06.895423 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:06.895608 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:06.895625 1143450 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-030870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-030870/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-030870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:07.007930 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
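	[editor's note] The two SSH commands above set the guest's hostname and keep /etc/hosts consistent with it. A small sketch that renders the same shell for an arbitrary machine name; the two separate commands from the log are collapsed into one string here purely for brevity.

// Sketch: build the hostname-provisioning shell seen in the log for a given name.
package main

import "fmt"

func setHostnameCmd(name string) string {
	return fmt.Sprintf(
		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
			`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
			`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
			`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
			`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, name)
}

func main() {
	fmt.Println(setHostnameCmd("default-k8s-diff-port-030870"))
}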
	I0603 13:50:07.007971 1143450 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:07.008009 1143450 buildroot.go:174] setting up certificates
	I0603 13:50:07.008020 1143450 provision.go:84] configureAuth start
	I0603 13:50:07.008034 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetMachineName
	I0603 13:50:07.008433 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:07.011208 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011607 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.011640 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.011774 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.013986 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014431 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.014462 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.014655 1143450 provision.go:143] copyHostCerts
	I0603 13:50:07.014726 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:07.014737 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:07.014787 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:07.014874 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:07.014882 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:07.014902 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:07.014952 1143450 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:07.014959 1143450 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:07.014974 1143450 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:07.015020 1143450 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-030870 san=[127.0.0.1 192.168.39.177 default-k8s-diff-port-030870 localhost minikube]
	I0603 13:50:07.402535 1143450 provision.go:177] copyRemoteCerts
	I0603 13:50:07.402595 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:07.402626 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.405891 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406240 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.406272 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.406484 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.406718 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.406943 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.407132 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.489480 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:07.517212 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0603 13:50:07.543510 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:07.570284 1143450 provision.go:87] duration metric: took 562.244781ms to configureAuth
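	[editor's note] The configureAuth step above generates a server certificate whose SANs cover the machine IP and names reported in the log, signs it with the minikube CA, and copies it to /etc/docker on the node. The following is a self-contained crypto/x509 sketch of that certificate generation; as a simplification the CA is created on the fly here, whereas the real flow reuses the CA kept under ~/.minikube/certs.

// Sketch: create a CA and sign a server cert with the SANs reported in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-030870"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs reported in the log: 127.0.0.1 192.168.39.177 default-k8s-diff-port-030870 localhost minikube
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.177")},
		DNSNames:    []string{"default-k8s-diff-port-030870", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}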
	I0603 13:50:07.570318 1143450 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:07.570537 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:07.570629 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.574171 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574706 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.574739 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.574948 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.575262 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575549 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.575781 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.575965 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.576217 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.576247 1143450 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:07.839415 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:07.839455 1143450 machine.go:97] duration metric: took 1.180296439s to provisionDockerMachine
	I0603 13:50:07.839468 1143450 start.go:293] postStartSetup for "default-k8s-diff-port-030870" (driver="kvm2")
	I0603 13:50:07.839482 1143450 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:07.839506 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:07.839843 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:07.839872 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.842547 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.842884 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.842918 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.843234 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.843471 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.843708 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.843952 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:07.927654 1143450 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:07.932965 1143450 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:07.932997 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:07.933082 1143450 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:07.933202 1143450 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:07.933343 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:07.945059 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:07.975774 1143450 start.go:296] duration metric: took 136.280559ms for postStartSetup
	I0603 13:50:07.975822 1143450 fix.go:56] duration metric: took 20.481265153s for fixHost
	I0603 13:50:07.975848 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:07.979035 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979436 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:07.979486 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:07.979737 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:07.980012 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980228 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:07.980452 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:07.980691 1143450 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:07.980935 1143450 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0603 13:50:07.980954 1143450 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:50:08.082946 1143450 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422608.057620379
	
	I0603 13:50:08.082978 1143450 fix.go:216] guest clock: 1717422608.057620379
	I0603 13:50:08.082988 1143450 fix.go:229] Guest: 2024-06-03 13:50:08.057620379 +0000 UTC Remote: 2024-06-03 13:50:07.975826846 +0000 UTC m=+237.845886752 (delta=81.793533ms)
	I0603 13:50:08.083018 1143450 fix.go:200] guest clock delta is within tolerance: 81.793533ms
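	[editor's note] The three lines above read the guest clock with `date +%s.%N`, compare it to the host clock, and accept the machine because the 81.79ms delta is within tolerance. A minimal sketch of that check; the one-second tolerance below is an assumed value for illustration, not minikube's actual threshold, and the hard-coded guest timestamp is the one from the log (so run against time.Now() it will of course report a large delta).

// Sketch: parse the guest's `date +%s.%N` output and check the skew against a tolerance.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDelta(guestDate string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestDate, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second))) // float parse is approximate but fine here
	return host.Sub(guest), nil
}

func main() {
	const tolerance = time.Second // assumed threshold, for illustration only
	delta, err := clockDelta("1717422608.057620379", time.Now())
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; the guest clock would be resynced\n", delta)
	}
}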
	I0603 13:50:08.083025 1143450 start.go:83] releasing machines lock for "default-k8s-diff-port-030870", held for 20.588515063s
	I0603 13:50:08.083060 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.083369 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:08.086674 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087202 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.087285 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.087508 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088324 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088575 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:08.088673 1143450 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:08.088758 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.088823 1143450 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:08.088852 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:08.092020 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092175 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092406 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092485 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092863 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:08.092893 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:08.092916 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.092924 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:08.093273 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093276 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:08.093522 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093541 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:08.093708 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.093710 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:08.176292 1143450 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:08.204977 1143450 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:08.367121 1143450 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:08.376347 1143450 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:08.376431 1143450 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:08.398639 1143450 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:08.398672 1143450 start.go:494] detecting cgroup driver to use...
	I0603 13:50:08.398750 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:08.422776 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:08.443035 1143450 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:08.443108 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:08.459853 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:08.482009 1143450 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:08.631237 1143450 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:08.806623 1143450 docker.go:233] disabling docker service ...
	I0603 13:50:08.806715 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:08.827122 1143450 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:08.842457 1143450 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:08.999795 1143450 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:09.148706 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
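	[editor's note] The block above stops, disables and masks the cri-docker and docker units so that only cri-o serves the CRI socket. A sketch of the same sequence as plain exec calls with sudo (rather than minikube's ssh_runner); treating failures as non-fatal mirrors the best-effort behaviour visible in the log, but is an assumption about intent.

// Sketch: disable the Docker-based CRI units, best-effort.
package main

import (
	"log"
	"os/exec"
)

func systemctl(args ...string) {
	if out, err := exec.Command("sudo", append([]string{"systemctl"}, args...)...).CombinedOutput(); err != nil {
		log.Printf("systemctl %v: %v\n%s", args, err, out) // keep going; later steps don't depend on these units existing
	}
}

func main() {
	systemctl("stop", "-f", "cri-docker.socket")
	systemctl("stop", "-f", "cri-docker.service")
	systemctl("disable", "cri-docker.socket")
	systemctl("mask", "cri-docker.service")
	systemctl("stop", "-f", "docker.socket")
	systemctl("stop", "-f", "docker.service")
	systemctl("disable", "docker.socket")
	systemctl("mask", "docker.service")
}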
	I0603 13:50:09.167314 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:09.188867 1143450 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:09.188959 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.202239 1143450 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:09.202319 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.216228 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.231140 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.246767 1143450 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:09.260418 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.274349 1143450 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.300588 1143450 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:09.314659 1143450 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:09.326844 1143450 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:09.326919 1143450 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:09.344375 1143450 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:09.357955 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:09.504105 1143450 ssh_runner.go:195] Run: sudo systemctl restart crio
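	[editor's note] The sed commands above rewrite the cri-o drop-in so it uses registry.k8s.io/pause:3.9 as pause image and cgroupfs as cgroup manager, then restart crio. Below is a sketch of the same rewrite done locally with regexp on a copy of the file; the local path is illustrative, and as a simplification it does not first delete a pre-existing conmon_cgroup line the way the log's sed sequence does.

// Sketch: apply the pause_image / cgroup_manager edits to a cri-o drop-in file.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "02-crio.conf" // illustrative local copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		log.Fatal(err)
	}
	// On the node the change takes effect via: sudo systemctl daemon-reload && sudo systemctl restart crio
}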
	I0603 13:50:09.685468 1143450 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:09.685562 1143450 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:09.690863 1143450 start.go:562] Will wait 60s for crictl version
	I0603 13:50:09.690943 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:50:09.696532 1143450 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:09.742785 1143450 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:09.742891 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.782137 1143450 ssh_runner.go:195] Run: crio --version
	I0603 13:50:09.816251 1143450 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:09.817854 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetIP
	I0603 13:50:09.821049 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821555 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:09.821595 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:09.821855 1143450 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:09.826658 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:09.841351 1143450 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:09.841521 1143450 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:09.841586 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:09.883751 1143450 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:09.883825 1143450 ssh_runner.go:195] Run: which lz4
	I0603 13:50:09.888383 1143450 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0603 13:50:09.893662 1143450 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:09.893704 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0603 13:50:08.110706 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .Start
	I0603 13:50:08.110954 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring networks are active...
	I0603 13:50:08.111890 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network default is active
	I0603 13:50:08.112291 1143678 main.go:141] libmachine: (old-k8s-version-151788) Ensuring network mk-old-k8s-version-151788 is active
	I0603 13:50:08.112708 1143678 main.go:141] libmachine: (old-k8s-version-151788) Getting domain xml...
	I0603 13:50:08.113547 1143678 main.go:141] libmachine: (old-k8s-version-151788) Creating domain...
	I0603 13:50:09.528855 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting to get IP...
	I0603 13:50:09.529978 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.530410 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.530453 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.530382 1144654 retry.go:31] will retry after 208.935457ms: waiting for machine to come up
	I0603 13:50:09.741245 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:09.741816 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:09.741864 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:09.741769 1144654 retry.go:31] will retry after 376.532154ms: waiting for machine to come up
	I0603 13:50:10.120533 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.121261 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.121337 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.121239 1144654 retry.go:31] will retry after 339.126643ms: waiting for machine to come up
	I0603 13:50:10.461708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.462488 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.462514 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.462425 1144654 retry.go:31] will retry after 490.057426ms: waiting for machine to come up
	I0603 13:50:10.954107 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:10.954887 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:10.954921 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:10.954840 1144654 retry.go:31] will retry after 711.209001ms: waiting for machine to come up
	I0603 13:50:11.667459 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:11.668198 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:11.668231 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:11.668135 1144654 retry.go:31] will retry after 928.879285ms: waiting for machine to come up
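	[editor's note] The "will retry after ... waiting for machine to come up" lines above are a retry loop polling libvirt for the VM's DHCP lease with growing, jittered delays. A small sketch of that pattern with a stubbed probe; the initial delay, growth factor and deadline are illustrative, not minikube's actual retry.go parameters.

// Sketch: retry a probe with growing, jittered delays until success or deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(deadline time.Duration, probe func() error) error {
	start := time.Now()
	wait := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if err := probe(); err == nil {
			return nil
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		wait *= 2
	}
	return errors.New("machine did not come up before the deadline")
}

func main() {
	attempts := 0
	_ = retry(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
}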
	I0603 13:50:09.645006 1143252 node_ready.go:53] node "embed-certs-223260" has status "Ready":"False"
	I0603 13:50:10.146403 1143252 node_ready.go:49] node "embed-certs-223260" has status "Ready":"True"
	I0603 13:50:10.146438 1143252 node_ready.go:38] duration metric: took 5.005510729s for node "embed-certs-223260" to be "Ready" ...
	I0603 13:50:10.146453 1143252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:10.154249 1143252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164361 1143252 pod_ready.go:92] pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:10.164401 1143252 pod_ready.go:81] duration metric: took 10.115855ms for pod "coredns-7db6d8ff4d-qdjrv" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:10.164419 1143252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675214 1143252 pod_ready.go:92] pod "etcd-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:11.675243 1143252 pod_ready.go:81] duration metric: took 1.510815036s for pod "etcd-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:11.675254 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
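Note: the pod_ready.go lines poll each system-critical pod until its Ready condition turns True, with a per-pod timeout. A rough client-go sketch of the same check, assuming a kubeconfig at the default location (this is not minikube's actual pod_ready implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or the
// timeout expires, roughly what the pod_ready.go lines above report.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-embed-certs-223260", 6*time.Minute))
}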
	I0603 13:50:11.522734 1143450 crio.go:462] duration metric: took 1.634406537s to copy over tarball
	I0603 13:50:11.522837 1143450 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:13.983446 1143450 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.460564522s)
	I0603 13:50:13.983484 1143450 crio.go:469] duration metric: took 2.460706596s to extract the tarball
	I0603 13:50:13.983503 1143450 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0603 13:50:14.029942 1143450 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:14.083084 1143450 crio.go:514] all images are preloaded for cri-o runtime.
	I0603 13:50:14.083113 1143450 cache_images.go:84] Images are preloaded, skipping loading
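Note: the crio.go/cache_images.go lines unpack the lz4 preload tarball into /var and then ask crictl for its image list to confirm nothing needs to be pulled. A sketch of those two steps; the paths come from the log, while the crictl JSON field names are an assumption for illustration:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// extractPreload mirrors the two steps in the log: untar the lz4 preload
// into /var, then list images via crictl to confirm the runtime sees them.
func extractPreload() ([]string, error) {
	tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := tar.CombinedOutput(); err != nil {
		return nil, fmt.Errorf("extract preload: %v: %s", err, out)
	}
	raw, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var list struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range list.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	tags, err := extractPreload()
	fmt.Println(tags, err)
}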
	I0603 13:50:14.083122 1143450 kubeadm.go:928] updating node { 192.168.39.177 8444 v1.30.1 crio true true} ...
	I0603 13:50:14.083247 1143450 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-030870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:14.083319 1143450 ssh_runner.go:195] Run: crio config
	I0603 13:50:14.142320 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:14.142344 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:14.142354 1143450 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:14.142379 1143450 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-030870 NodeName:default-k8s-diff-port-030870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:50:14.142517 1143450 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-030870"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:14.142577 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:50:14.153585 1143450 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:14.153687 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:14.164499 1143450 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0603 13:50:14.186564 1143450 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:14.205489 1143450 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
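Note: the scp above writes the kubeadm/kubelet/kube-proxy manifest rendered earlier to /var/tmp/minikube/kubeadm.yaml.new. Such a manifest is produced by filling a template with the cluster's parameters; below is a heavily abbreviated text/template sketch (the template text is illustrative, not minikube's full template):

package main

import (
	"os"
	"text/template"
)

// params mirrors a few of the fields visible in the rendered config above.
type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

// tmpl is a deliberately abbreviated stand-in for the full kubeadm template.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.39.177",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-030870",
		PodSubnet:        "10.244.0.0/16",
		K8sVersion:       "v1.30.1",
	})
}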
	I0603 13:50:14.227005 1143450 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:14.231782 1143450 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
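Note: the bash one-liner above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: any stale line for that host is dropped and the current IP is appended. The same effect as a small Go sketch (hypothetical helper, illustration only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "<TAB>host" and appends a fresh
// "ip<TAB>host" line, the same effect as the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.39.177", "control-plane.minikube.internal"))
}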
	I0603 13:50:14.247433 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:14.368336 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:14.391791 1143450 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870 for IP: 192.168.39.177
	I0603 13:50:14.391816 1143450 certs.go:194] generating shared ca certs ...
	I0603 13:50:14.391840 1143450 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:14.392015 1143450 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:14.392075 1143450 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:14.392090 1143450 certs.go:256] generating profile certs ...
	I0603 13:50:14.392282 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/client.key
	I0603 13:50:14.392373 1143450 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key.7a30187e
	I0603 13:50:14.392428 1143450 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key
	I0603 13:50:14.392545 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:14.392602 1143450 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:14.392616 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:14.392650 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:14.392687 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:14.392722 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:14.392780 1143450 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:14.393706 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:14.424354 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:14.476267 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:14.514457 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:14.548166 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0603 13:50:14.584479 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:14.626894 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:14.663103 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/default-k8s-diff-port-030870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0603 13:50:14.696750 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:14.725770 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:14.755779 1143450 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:14.786060 1143450 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:14.805976 1143450 ssh_runner.go:195] Run: openssl version
	I0603 13:50:14.812737 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:14.824707 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831139 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.831255 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:14.838855 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:14.850974 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:14.865613 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871431 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.871518 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:14.878919 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:14.891371 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:14.903721 1143450 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909069 1143450 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.909180 1143450 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:14.915904 1143450 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
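Note: each openssl x509 -hash / ln -fs pair above installs a CA into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch that derives the hash the same way and creates the symlink (it shells out to the real openssl binary; the path in main is taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of pemPath and symlinks it
// into /etc/ssl/certs as "<hash>.0", matching the ln -fs calls in the log.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}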
	I0603 13:50:14.928622 1143450 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:14.934466 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:14.941321 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:14.947960 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:14.955629 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:14.962761 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:14.970396 1143450 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
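Note: openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, which is how the existing control-plane certs are vetted before being reused. The same check can be done in-process with crypto/x509; a small sketch (the path in main is one of the files checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}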
	I0603 13:50:14.977381 1143450 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-030870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-030870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:14.977543 1143450 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:14.977599 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.042628 1143450 cri.go:89] found id: ""
	I0603 13:50:15.042733 1143450 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:15.055439 1143450 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:15.055469 1143450 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:15.055476 1143450 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:15.055535 1143450 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:15.067250 1143450 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:15.068159 1143450 kubeconfig.go:125] found "default-k8s-diff-port-030870" server: "https://192.168.39.177:8444"
	I0603 13:50:15.070060 1143450 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:15.082723 1143450 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.177
	I0603 13:50:15.082788 1143450 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:15.082809 1143450 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:15.082972 1143450 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:15.124369 1143450 cri.go:89] found id: ""
	I0603 13:50:15.124509 1143450 ssh_runner.go:195] Run: sudo systemctl stop kubelet
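Note: before the control plane is restarted, any kube-system containers are listed via crictl (none were found here) and the kubelet is stopped. A sketch of the container-stopping half of that step, assuming crictl is on the PATH (this is not minikube's cri package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists kube-system containers via crictl and stops
// each one, the step the "stopping kube-system containers ..." line performs.
func stopKubeSystemContainers() error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("stopping", id)
		if err := exec.Command("sudo", "crictl", "stop", id).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(stopKubeSystemContainers())
}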
	I0603 13:50:15.144064 1143450 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:15.156148 1143450 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:15.156174 1143450 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:15.156240 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0603 13:50:15.166927 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:15.167006 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:12.598536 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:12.598972 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:12.599008 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:12.598948 1144654 retry.go:31] will retry after 882.970422ms: waiting for machine to come up
	I0603 13:50:13.483171 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:13.483723 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:13.483758 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:13.483640 1144654 retry.go:31] will retry after 1.215665556s: waiting for machine to come up
	I0603 13:50:14.701392 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:14.701960 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:14.701991 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:14.701899 1144654 retry.go:31] will retry after 1.614371992s: waiting for machine to come up
	I0603 13:50:16.318708 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:16.319127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:16.319148 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:16.319103 1144654 retry.go:31] will retry after 2.146267337s: waiting for machine to come up
	I0603 13:50:13.683419 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:15.684744 1143252 pod_ready.go:102] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:16.792510 1143252 pod_ready.go:92] pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.792538 1143252 pod_ready.go:81] duration metric: took 5.117277447s for pod "kube-apiserver-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.792549 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798083 1143252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.798112 1143252 pod_ready.go:81] duration metric: took 5.554915ms for pod "kube-controller-manager-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.798126 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804217 1143252 pod_ready.go:92] pod "kube-proxy-s5vdl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.804247 1143252 pod_ready.go:81] duration metric: took 6.113411ms for pod "kube-proxy-s5vdl" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.804262 1143252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810317 1143252 pod_ready.go:92] pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:16.810343 1143252 pod_ready.go:81] duration metric: took 6.073098ms for pod "kube-scheduler-embed-certs-223260" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:16.810357 1143252 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:15.178645 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0603 13:50:15.486524 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:15.486608 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:15.497694 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.509586 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:15.509665 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:15.521976 1143450 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0603 13:50:15.533446 1143450 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:15.533535 1143450 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
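Note: the grep/rm pairs above keep an /etc/kubernetes/*.conf file only if it already references https://control-plane.minikube.internal:8444; anything else is removed so "kubeadm init phase kubeconfig" regenerates it. A compact sketch of that pruning loop (same file list and endpoint as in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any config that does not reference the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func pruneStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444")
}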
	I0603 13:50:15.545525 1143450 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:15.557558 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:15.710109 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.725380 1143450 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015227554s)
	I0603 13:50:16.725452 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:16.964275 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.061586 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:17.183665 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:17.183764 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:17.684365 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.184269 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:18.254733 1143450 api_server.go:72] duration metric: took 1.07106398s to wait for apiserver process to appear ...
	I0603 13:50:18.254769 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:50:18.254797 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:18.466825 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:18.467260 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:18.467292 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:18.467187 1144654 retry.go:31] will retry after 2.752334209s: waiting for machine to come up
	I0603 13:50:21.220813 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:21.221235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:21.221267 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:21.221182 1144654 retry.go:31] will retry after 3.082080728s: waiting for machine to come up
	I0603 13:50:18.819188 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.323790 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:21.193140 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.193177 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.193193 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.265534 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.265580 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.265602 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.277669 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:50:21.277703 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:50:21.754973 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:21.761802 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:21.761841 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.255071 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.262166 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.262227 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:22.755128 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:22.759896 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:22.759936 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.255520 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.262093 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.262128 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:23.755784 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:23.760053 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:23.760079 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.255534 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.259793 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:50:24.259820 1143450 api_server.go:103] status: https://192.168.39.177:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:50:24.755365 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:50:24.759964 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:50:24.768830 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:50:24.768862 1143450 api_server.go:131] duration metric: took 6.51408552s to wait for apiserver health ...
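Note: the 403 -> 500 -> 200 progression above is the normal apiserver startup sequence: anonymous /healthz requests are forbidden until the RBAC bootstrap roles exist, then individual post-start hooks report failures until they finish. A sketch of polling /healthz until it returns 200, skipping TLS verification for the self-signed serving cert (not minikube's api_server.go; a real caller would authenticate with the cluster's client certs instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint every 500ms until it
// returns 200 or the timeout expires. 403 and 500 responses are expected
// while RBAC bootstrap roles and post-start hooks are still completing.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.177:8444/healthz", 4*time.Minute))
}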
	I0603 13:50:24.768872 1143450 cni.go:84] Creating CNI manager for ""
	I0603 13:50:24.768879 1143450 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:24.771099 1143450 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:50:24.772806 1143450 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:50:24.784204 1143450 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
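Note: "Configuring bridge CNI" writes a conflist into /etc/cni/net.d (496 bytes here). The exact file contents are not shown in the log, so the conflist below is a generic bridge + host-local stand-in for the 10.244.0.0/16 pod CIDR used above, not the literal file minikube writes:

package main

import "os"

// A generic bridge + host-local conflist; treat it as an illustrative
// stand-in for the 1-k8s.conflist written in the step above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	_ = os.MkdirAll("/etc/cni/net.d", 0755)
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
}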
	I0603 13:50:24.805572 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:50:24.816944 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:50:24.816988 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:50:24.816997 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:50:24.817008 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:50:24.817021 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:50:24.817028 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:50:24.817037 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:50:24.817044 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:50:24.817050 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:50:24.817060 1143450 system_pods.go:74] duration metric: took 11.461696ms to wait for pod list to return data ...
	I0603 13:50:24.817069 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:50:24.820804 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:50:24.820834 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:50:24.820846 1143450 node_conditions.go:105] duration metric: took 3.771492ms to run NodePressure ...
	I0603 13:50:24.820865 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:25.098472 1143450 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103237 1143450 kubeadm.go:733] kubelet initialised
	I0603 13:50:25.103263 1143450 kubeadm.go:734] duration metric: took 4.763539ms waiting for restarted kubelet to initialise ...
	I0603 13:50:25.103274 1143450 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:25.109364 1143450 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.114629 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114662 1143450 pod_ready.go:81] duration metric: took 5.268473ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.114676 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.114687 1143450 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.118734 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118777 1143450 pod_ready.go:81] duration metric: took 4.079659ms for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.118790 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.118810 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.123298 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123334 1143450 pod_ready.go:81] duration metric: took 4.509948ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.123351 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.123361 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.210283 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210316 1143450 pod_ready.go:81] duration metric: took 86.945898ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.210329 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.210338 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:25.609043 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609074 1143450 pod_ready.go:81] duration metric: took 398.728553ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:25.609084 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-proxy-thsrx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:25.609091 1143450 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.009831 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009866 1143450 pod_ready.go:81] duration metric: took 400.766037ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.009880 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.009888 1143450 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:26.410271 1143450 pod_ready.go:97] node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410301 1143450 pod_ready.go:81] duration metric: took 400.402293ms for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:50:26.410315 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-030870" hosting pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:26.410326 1143450 pod_ready.go:38] duration metric: took 1.307039933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
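The pod_ready wait above repeatedly checks whether each system-critical pod reports the Ready condition while the node itself is still NotReady. Purely as a sketch of the same idea using client-go (not the pod_ready.go code), the check reduces to listing pods by label and inspecting their PodReady condition; the kubeconfig path and the label selector below are assumptions for the example.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries a PodReady condition set to True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"}) // assumed selector
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}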
	I0603 13:50:26.410347 1143450 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:50:26.422726 1143450 ops.go:34] apiserver oom_adj: -16
	I0603 13:50:26.422753 1143450 kubeadm.go:591] duration metric: took 11.367271168s to restartPrimaryControlPlane
	I0603 13:50:26.422763 1143450 kubeadm.go:393] duration metric: took 11.445396197s to StartCluster
	I0603 13:50:26.422784 1143450 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.422866 1143450 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:26.424423 1143450 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:26.424744 1143450 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:50:26.426628 1143450 out.go:177] * Verifying Kubernetes components...
	I0603 13:50:26.424855 1143450 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:50:26.424985 1143450 config.go:182] Loaded profile config "default-k8s-diff-port-030870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:26.428227 1143450 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:26.428239 1143450 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428241 1143450 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428275 1143450 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-030870"
	I0603 13:50:26.428285 1143450 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428297 1143450 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:50:26.428243 1143450 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-030870"
	I0603 13:50:26.428338 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428404 1143450 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.428428 1143450 addons.go:243] addon metrics-server should already be in state true
	I0603 13:50:26.428501 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.428650 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428676 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428724 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.428751 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.428948 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.429001 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.445709 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0603 13:50:26.446187 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.446719 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.446743 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.447152 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.447817 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.447852 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.449660 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0603 13:50:26.449721 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0603 13:50:26.450120 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450161 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.450735 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450755 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.450906 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.450930 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.451177 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451333 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.451421 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.451909 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.451951 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.455458 1143450 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-030870"
	W0603 13:50:26.455484 1143450 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:50:26.455523 1143450 host.go:66] Checking if "default-k8s-diff-port-030870" exists ...
	I0603 13:50:26.455776 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.455825 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.470807 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0603 13:50:26.471179 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0603 13:50:26.471763 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.471921 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472042 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0603 13:50:26.472471 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472501 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472575 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.472750 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.472760 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.472966 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473095 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.473118 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.473132 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.473134 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473357 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.473486 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.474129 1143450 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:26.474183 1143450 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:26.475437 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.475594 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.477911 1143450 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:26.479474 1143450 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:50:24.304462 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:24.305104 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | unable to find current IP address of domain old-k8s-version-151788 in network mk-old-k8s-version-151788
	I0603 13:50:24.305175 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | I0603 13:50:24.305099 1144654 retry.go:31] will retry after 4.178596743s: waiting for machine to come up
	I0603 13:50:26.480998 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:50:26.481021 1143450 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:50:26.481047 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.479556 1143450 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.481095 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:50:26.481116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.484634 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.484694 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485083 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485116 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485147 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.485160 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.485538 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485628 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.485729 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485829 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.485856 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.485993 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.486040 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.486158 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.496035 1143450 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0603 13:50:26.496671 1143450 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:26.497270 1143450 main.go:141] libmachine: Using API Version  1
	I0603 13:50:26.497290 1143450 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:26.497719 1143450 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:26.497989 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetState
	I0603 13:50:26.500018 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .DriverName
	I0603 13:50:26.500280 1143450 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.500298 1143450 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:50:26.500318 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHHostname
	I0603 13:50:26.503226 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503732 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:09:d4", ip: ""} in network mk-default-k8s-diff-port-030870: {Iface:virbr1 ExpiryTime:2024-06-03 14:49:58 +0000 UTC Type:0 Mac:52:54:00:62:09:d4 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:default-k8s-diff-port-030870 Clientid:01:52:54:00:62:09:d4}
	I0603 13:50:26.503768 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | domain default-k8s-diff-port-030870 has defined IP address 192.168.39.177 and MAC address 52:54:00:62:09:d4 in network mk-default-k8s-diff-port-030870
	I0603 13:50:26.503967 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHPort
	I0603 13:50:26.504212 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHKeyPath
	I0603 13:50:26.504399 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .GetSSHUsername
	I0603 13:50:26.504556 1143450 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/default-k8s-diff-port-030870/id_rsa Username:docker}
	I0603 13:50:26.608774 1143450 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:26.629145 1143450 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:26.692164 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:50:26.784756 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:50:26.788686 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:50:26.788711 1143450 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:50:26.841094 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:50:26.841129 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:50:26.907657 1143450 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:26.907688 1143450 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:50:26.963244 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963280 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963618 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963641 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963649 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.963653 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.963657 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.963962 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.963980 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.963982 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:26.971726 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:26.971748 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:26.972101 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:26.972125 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:26.975238 1143450 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:50:27.653643 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.653689 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654037 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654061 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.654078 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.654087 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.654429 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.654484 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.654507 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847367 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847397 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.847745 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.847770 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.847779 1143450 main.go:141] libmachine: Making call to close driver server
	I0603 13:50:27.847785 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) DBG | Closing plugin on server side
	I0603 13:50:27.847793 1143450 main.go:141] libmachine: (default-k8s-diff-port-030870) Calling .Close
	I0603 13:50:27.848112 1143450 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:50:27.848130 1143450 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:50:27.848144 1143450 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-030870"
	I0603 13:50:27.851386 1143450 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0603 13:50:23.817272 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:25.818013 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:27.818160 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:29.798777 1142862 start.go:364] duration metric: took 56.694826675s to acquireMachinesLock for "no-preload-817450"
	I0603 13:50:29.798855 1142862 start.go:96] Skipping create...Using existing machine configuration
	I0603 13:50:29.798866 1142862 fix.go:54] fixHost starting: 
	I0603 13:50:29.799329 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:50:29.799369 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:50:29.817787 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0603 13:50:29.818396 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:50:29.819003 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:50:29.819025 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:50:29.819450 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:50:29.819617 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:29.819782 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:50:29.821742 1142862 fix.go:112] recreateIfNeeded on no-preload-817450: state=Stopped err=<nil>
	I0603 13:50:29.821777 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	W0603 13:50:29.821973 1142862 fix.go:138] unexpected machine state, will restart: <nil>
	I0603 13:50:29.823915 1142862 out.go:177] * Restarting existing kvm2 VM for "no-preload-817450" ...
	I0603 13:50:27.852929 1143450 addons.go:510] duration metric: took 1.428071927s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0603 13:50:28.633355 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:29.825584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Start
	I0603 13:50:29.825783 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring networks are active...
	I0603 13:50:29.826746 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network default is active
	I0603 13:50:29.827116 1142862 main.go:141] libmachine: (no-preload-817450) Ensuring network mk-no-preload-817450 is active
	I0603 13:50:29.827617 1142862 main.go:141] libmachine: (no-preload-817450) Getting domain xml...
	I0603 13:50:29.828419 1142862 main.go:141] libmachine: (no-preload-817450) Creating domain...
	I0603 13:50:28.485041 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.485598 1143678 main.go:141] libmachine: (old-k8s-version-151788) Found IP for machine: 192.168.50.65
	I0603 13:50:28.485624 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserving static IP address...
	I0603 13:50:28.485639 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has current primary IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.486053 1143678 main.go:141] libmachine: (old-k8s-version-151788) Reserved static IP address: 192.168.50.65
	I0603 13:50:28.486109 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.486123 1143678 main.go:141] libmachine: (old-k8s-version-151788) Waiting for SSH to be available...
	I0603 13:50:28.486144 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | skip adding static IP to network mk-old-k8s-version-151788 - found existing host DHCP lease matching {name: "old-k8s-version-151788", mac: "52:54:00:56:4e:c1", ip: "192.168.50.65"}
	I0603 13:50:28.486156 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Getting to WaitForSSH function...
	I0603 13:50:28.488305 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.488754 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.488788 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.489025 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH client type: external
	I0603 13:50:28.489048 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa (-rw-------)
	I0603 13:50:28.489114 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:28.489147 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | About to run SSH command:
	I0603 13:50:28.489167 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | exit 0
	I0603 13:50:28.613732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:28.614183 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetConfigRaw
	I0603 13:50:28.614879 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.617742 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618235 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.618270 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.618481 1143678 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/config.json ...
	I0603 13:50:28.618699 1143678 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:28.618719 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:28.618967 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.621356 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621655 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.621685 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.621897 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.622117 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622321 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.622511 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.622750 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.622946 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.622958 1143678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:28.726383 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:28.726419 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.726740 1143678 buildroot.go:166] provisioning hostname "old-k8s-version-151788"
	I0603 13:50:28.726777 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.727042 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.729901 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730372 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.730402 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.730599 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.730824 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731031 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.731205 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.731403 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.731585 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.731599 1143678 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-151788 && echo "old-k8s-version-151788" | sudo tee /etc/hostname
	I0603 13:50:28.848834 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-151788
	
	I0603 13:50:28.848867 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.852250 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852698 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.852721 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.852980 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:28.853239 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853536 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:28.853819 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:28.854093 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:28.854338 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:28.854367 1143678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-151788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-151788/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-151788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:28.967427 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:28.967461 1143678 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:28.967520 1143678 buildroot.go:174] setting up certificates
	I0603 13:50:28.967538 1143678 provision.go:84] configureAuth start
	I0603 13:50:28.967550 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetMachineName
	I0603 13:50:28.967946 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:28.970841 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971226 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.971256 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.971449 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:28.974316 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974702 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:28.974732 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:28.974911 1143678 provision.go:143] copyHostCerts
	I0603 13:50:28.974994 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:28.975010 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:28.975068 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:28.975247 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:28.975260 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:28.975283 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:28.975354 1143678 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:28.975362 1143678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:28.975385 1143678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:28.975463 1143678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-151788 san=[127.0.0.1 192.168.50.65 localhost minikube old-k8s-version-151788]
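The provision step above generates a server certificate whose SANs cover 127.0.0.1, 192.168.50.65, localhost, minikube and the machine name. A minimal Go sketch of issuing a certificate with those SANs is shown below; it is self-signed for brevity, whereas minikube signs with its CA key, and the organization, validity period and key size are assumptions rather than values taken from the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-151788"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision.go log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-151788"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.65")},
	}
	// Self-signed here; minikube uses its ca.pem/ca-key.pem as the signer instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}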
	I0603 13:50:29.096777 1143678 provision.go:177] copyRemoteCerts
	I0603 13:50:29.096835 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:29.096865 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.099989 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100408 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.100434 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.100644 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.100831 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.100975 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.101144 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.184886 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:29.211432 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0603 13:50:29.238552 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0603 13:50:29.266803 1143678 provision.go:87] duration metric: took 299.247567ms to configureAuth
	I0603 13:50:29.266844 1143678 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:29.267107 1143678 config.go:182] Loaded profile config "old-k8s-version-151788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:50:29.267220 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.270966 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271417 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.271472 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.271688 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.271893 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272121 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.272327 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.272544 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.272787 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.272811 1143678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:29.548407 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
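The "%!s(MISSING)" fragments in the command above (and in the later "date +%!s(MISSING).%!N(MISSING)" line) are not corruption in this report: the shell command itself contains literal printf/date format verbs, and when the command string passes through a Go fmt-style formatter with no matching arguments, fmt renders each unfilled verb this way. The executed command is unaffected, as the correctly written CRIO_MINIKUBE_OPTIONS output above shows. A minimal reproduction of the marker:

package main

import "fmt"

func main() {
	// One %s verb, zero arguments: fmt renders the verb as "%!s(MISSING)".
	s := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"")
	fmt.Println(s)
}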
	
	I0603 13:50:29.548437 1143678 machine.go:97] duration metric: took 929.724002ms to provisionDockerMachine
	I0603 13:50:29.548449 1143678 start.go:293] postStartSetup for "old-k8s-version-151788" (driver="kvm2")
	I0603 13:50:29.548461 1143678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:29.548486 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.548924 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:29.548992 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.552127 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552531 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.552571 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.552756 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.552974 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.553166 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.553364 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.637026 1143678 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:29.641264 1143678 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:29.641293 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:29.641376 1143678 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:29.641509 1143678 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:29.641600 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:29.657273 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:29.688757 1143678 start.go:296] duration metric: took 140.291954ms for postStartSetup
	I0603 13:50:29.688806 1143678 fix.go:56] duration metric: took 21.605539652s for fixHost
	I0603 13:50:29.688843 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.691764 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692170 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.692216 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.692356 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.692623 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692814 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.692996 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.693180 1143678 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:29.693372 1143678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.65 22 <nil> <nil>}
	I0603 13:50:29.693384 1143678 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0603 13:50:29.798629 1143678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422629.770375968
	
	I0603 13:50:29.798655 1143678 fix.go:216] guest clock: 1717422629.770375968
	I0603 13:50:29.798662 1143678 fix.go:229] Guest: 2024-06-03 13:50:29.770375968 +0000 UTC Remote: 2024-06-03 13:50:29.688811675 +0000 UTC m=+247.377673500 (delta=81.564293ms)
	I0603 13:50:29.798683 1143678 fix.go:200] guest clock delta is within tolerance: 81.564293ms
	I0603 13:50:29.798688 1143678 start.go:83] releasing machines lock for "old-k8s-version-151788", held for 21.715483341s
	I0603 13:50:29.798712 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.799019 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:29.802078 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802479 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.802522 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.802674 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803271 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803496 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .DriverName
	I0603 13:50:29.803584 1143678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:29.803646 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.803961 1143678 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:29.803988 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHHostname
	I0603 13:50:29.806505 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806863 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.806926 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.806961 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807093 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807299 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807345 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:29.807386 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:29.807476 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.807670 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHPort
	I0603 13:50:29.807669 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.807841 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHKeyPath
	I0603 13:50:29.807947 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetSSHUsername
	I0603 13:50:29.808183 1143678 sshutil.go:53] new ssh client: &{IP:192.168.50.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/old-k8s-version-151788/id_rsa Username:docker}
	I0603 13:50:29.890622 1143678 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:29.918437 1143678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:30.064471 1143678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:30.073881 1143678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:30.073969 1143678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:30.097037 1143678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:30.097070 1143678 start.go:494] detecting cgroup driver to use...
	I0603 13:50:30.097147 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:30.114374 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:30.132000 1143678 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:30.132075 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:30.148156 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:30.164601 1143678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:30.303125 1143678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:30.475478 1143678 docker.go:233] disabling docker service ...
	I0603 13:50:30.475578 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:30.494632 1143678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:30.513383 1143678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:30.691539 1143678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:30.849280 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
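For reference, the cri-docker/docker teardown above boils down to the following shell steps (a hedged sketch, not part of the captured output; the unit names are the ones appearing in the logged commands):

    sudo systemctl stop cri-docker.socket cri-docker.service   # stop the cri-dockerd shim
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop docker.socket docker.service           # stop docker itself
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active docker || true                         # expect "inactive" before CRI-O takes over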
	I0603 13:50:30.869107 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:30.893451 1143678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0603 13:50:30.893528 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.909358 1143678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:30.909465 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.926891 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:30.941879 1143678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
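Taken together, the three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment (a reconstruction, not a capture from the guest; the TOML section names are where CRI-O normally keeps these keys):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"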
	I0603 13:50:30.957985 1143678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:30.971349 1143678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:30.984948 1143678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:30.985023 1143678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:30.999255 1143678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
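A hedged sketch of checking the same kernel prerequisites by hand once br_netfilter is loaded (the values shown are the expected ones, not captured output):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # expected: net.bridge.bridge-nf-call-iptables = 1
    cat /proc/sys/net/ipv4/ip_forward           # expected: 1, set by the echo above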
	I0603 13:50:31.011615 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:31.162848 1143678 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:31.352121 1143678 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:31.352190 1143678 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:31.357946 1143678 start.go:562] Will wait 60s for crictl version
	I0603 13:50:31.358032 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:31.362540 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:31.410642 1143678 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:31.410757 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.444750 1143678 ssh_runner.go:195] Run: crio --version
	I0603 13:50:31.482404 1143678 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0603 13:50:31.484218 1143678 main.go:141] libmachine: (old-k8s-version-151788) Calling .GetIP
	I0603 13:50:31.488049 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488663 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:4e:c1", ip: ""} in network mk-old-k8s-version-151788: {Iface:virbr2 ExpiryTime:2024-06-03 14:50:20 +0000 UTC Type:0 Mac:52:54:00:56:4e:c1 Iaid: IPaddr:192.168.50.65 Prefix:24 Hostname:old-k8s-version-151788 Clientid:01:52:54:00:56:4e:c1}
	I0603 13:50:31.488695 1143678 main.go:141] libmachine: (old-k8s-version-151788) DBG | domain old-k8s-version-151788 has defined IP address 192.168.50.65 and MAC address 52:54:00:56:4e:c1 in network mk-old-k8s-version-151788
	I0603 13:50:31.488985 1143678 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:31.494813 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
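The /etc/hosts rewrite above goes through a temp file and "sudo cp" because a plain "sudo ... > /etc/hosts" redirection would be performed by the unprivileged shell. A sketch of verifying the result, with the expected entry taken from the echo in the command:

    $ grep host.minikube.internal /etc/hosts
    192.168.50.1	host.minikube.internal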
	I0603 13:50:31.511436 1143678 kubeadm.go:877] updating cluster {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:31.511597 1143678 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 13:50:31.511659 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:31.571733 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:31.571819 1143678 ssh_runner.go:195] Run: which lz4
	I0603 13:50:31.577765 1143678 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0603 13:50:31.583983 1143678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0603 13:50:31.584025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0603 13:50:30.319230 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:32.824874 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:30.633456 1143450 node_ready.go:53] node "default-k8s-diff-port-030870" has status "Ready":"False"
	I0603 13:50:32.134192 1143450 node_ready.go:49] node "default-k8s-diff-port-030870" has status "Ready":"True"
	I0603 13:50:32.134227 1143450 node_ready.go:38] duration metric: took 5.505047986s for node "default-k8s-diff-port-030870" to be "Ready" ...
	I0603 13:50:32.134241 1143450 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:50:32.143157 1143450 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150075 1143450 pod_ready.go:92] pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:32.150113 1143450 pod_ready.go:81] duration metric: took 6.922006ms for pod "coredns-7db6d8ff4d-flxqj" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:32.150128 1143450 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:34.157758 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:31.283193 1142862 main.go:141] libmachine: (no-preload-817450) Waiting to get IP...
	I0603 13:50:31.284191 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.284681 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.284757 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.284641 1144889 retry.go:31] will retry after 246.139268ms: waiting for machine to come up
	I0603 13:50:31.532345 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.533024 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.533056 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.532956 1144889 retry.go:31] will retry after 283.586657ms: waiting for machine to come up
	I0603 13:50:31.818610 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:31.819271 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:31.819302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:31.819235 1144889 retry.go:31] will retry after 345.327314ms: waiting for machine to come up
	I0603 13:50:32.165948 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.166532 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.166585 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.166485 1144889 retry.go:31] will retry after 567.370644ms: waiting for machine to come up
	I0603 13:50:32.735409 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:32.736074 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:32.736118 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:32.735978 1144889 retry.go:31] will retry after 523.349811ms: waiting for machine to come up
	I0603 13:50:33.261023 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.261738 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.261769 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.261685 1144889 retry.go:31] will retry after 617.256992ms: waiting for machine to come up
	I0603 13:50:33.880579 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:33.881159 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:33.881188 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:33.881113 1144889 retry.go:31] will retry after 975.807438ms: waiting for machine to come up
	I0603 13:50:34.858935 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:34.859418 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:34.859447 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:34.859365 1144889 retry.go:31] will retry after 1.257722281s: waiting for machine to come up
	I0603 13:50:33.399678 1143678 crio.go:462] duration metric: took 1.821959808s to copy over tarball
	I0603 13:50:33.399768 1143678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0603 13:50:36.631033 1143678 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.231219364s)
	I0603 13:50:36.631081 1143678 crio.go:469] duration metric: took 3.231364789s to extract the tarball
	I0603 13:50:36.631092 1143678 ssh_runner.go:146] rm: /preloaded.tar.lz4
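For reference, the preload path above reduces to three commands: extract the lz4 tarball into /var (CRI-O keeps its image store under /var/lib/containers), delete the tarball, then re-list images. Commands and paths are the ones logged; only the head filter is added here for brevity:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json | head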
	I0603 13:50:36.677954 1143678 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:36.718160 1143678 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0603 13:50:36.718197 1143678 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.718295 1143678 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.718456 1143678 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.718302 1143678 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.718343 1143678 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0603 13:50:36.718335 1143678 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.718858 1143678 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.720644 1143678 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.720573 1143678 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0603 13:50:36.720574 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.720576 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.720603 1143678 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.720608 1143678 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.721118 1143678 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:36.907182 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:36.907179 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:36.910017 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:36.920969 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:36.925739 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:36.935710 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:36.946767 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0603 13:50:36.973425 1143678 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.050763 1143678 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0603 13:50:37.050817 1143678 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.050846 1143678 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0603 13:50:37.050876 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.050880 1143678 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.050906 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162505 1143678 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0603 13:50:37.162561 1143678 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.162608 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162706 1143678 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0603 13:50:37.162727 1143678 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.162754 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162858 1143678 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0603 13:50:37.162898 1143678 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.162922 1143678 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0603 13:50:37.162965 1143678 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0603 13:50:37.163001 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.162943 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.164963 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0603 13:50:37.165019 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0603 13:50:37.165136 1143678 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0603 13:50:37.165260 1143678 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.165295 1143678 ssh_runner.go:195] Run: which crictl
	I0603 13:50:37.188179 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0603 13:50:37.188292 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0603 13:50:37.188315 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0603 13:50:37.188371 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0603 13:50:37.188561 1143678 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0603 13:50:37.300592 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0603 13:50:37.300642 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0603 13:50:35.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.160066 1143450 pod_ready.go:102] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:37.334685 1143450 pod_ready.go:92] pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.334719 1143450 pod_ready.go:81] duration metric: took 5.184582613s for pod "etcd-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.334732 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341104 1143450 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.341140 1143450 pod_ready.go:81] duration metric: took 6.399805ms for pod "kube-apiserver-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.341154 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347174 1143450 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.347208 1143450 pod_ready.go:81] duration metric: took 6.044519ms for pod "kube-controller-manager-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.347220 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356909 1143450 pod_ready.go:92] pod "kube-proxy-thsrx" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.356949 1143450 pod_ready.go:81] duration metric: took 9.72108ms for pod "kube-proxy-thsrx" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.356962 1143450 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363891 1143450 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace has status "Ready":"True"
	I0603 13:50:37.363915 1143450 pod_ready.go:81] duration metric: took 6.9442ms for pod "kube-scheduler-default-k8s-diff-port-030870" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:37.363927 1143450 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	I0603 13:50:39.372092 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:36.118754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:36.119214 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:36.119251 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:36.119148 1144889 retry.go:31] will retry after 1.380813987s: waiting for machine to come up
	I0603 13:50:37.501464 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:37.501889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:37.501937 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:37.501849 1144889 retry.go:31] will retry after 2.144177789s: waiting for machine to come up
	I0603 13:50:39.648238 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:39.648744 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:39.648768 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:39.648693 1144889 retry.go:31] will retry after 1.947487062s: waiting for machine to come up
	I0603 13:50:37.360149 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0603 13:50:37.360196 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0603 13:50:37.360346 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0603 13:50:37.360371 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0603 13:50:37.360436 1143678 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0603 13:50:37.543409 1143678 cache_images.go:92] duration metric: took 825.189409ms to LoadCachedImages
	W0603 13:50:37.543559 1143678 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0603 13:50:37.543581 1143678 kubeadm.go:928] updating node { 192.168.50.65 8443 v1.20.0 crio true true} ...
	I0603 13:50:37.543723 1143678 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-151788 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:50:37.543804 1143678 ssh_runner.go:195] Run: crio config
	I0603 13:50:37.601388 1143678 cni.go:84] Creating CNI manager for ""
	I0603 13:50:37.601428 1143678 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:50:37.601445 1143678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:50:37.601471 1143678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-151788 NodeName:old-k8s-version-151788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0603 13:50:37.601664 1143678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-151788"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:50:37.601746 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0603 13:50:37.613507 1143678 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:50:37.613588 1143678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:50:37.623853 1143678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0603 13:50:37.642298 1143678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:50:37.660863 1143678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
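Once the rendered config has landed on the guest as /var/tmp/minikube/kubeadm.yaml.new, it could be exercised once with kubeadm's dry-run mode before the init phases further down (a hedged suggestion, not something the test does):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run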
	I0603 13:50:37.679974 1143678 ssh_runner.go:195] Run: grep 192.168.50.65	control-plane.minikube.internal$ /etc/hosts
	I0603 13:50:37.685376 1143678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:37.702732 1143678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:37.859343 1143678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:50:37.880684 1143678 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788 for IP: 192.168.50.65
	I0603 13:50:37.880714 1143678 certs.go:194] generating shared ca certs ...
	I0603 13:50:37.880737 1143678 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:37.880952 1143678 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:50:37.881012 1143678 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:50:37.881024 1143678 certs.go:256] generating profile certs ...
	I0603 13:50:37.881179 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/client.key
	I0603 13:50:37.881279 1143678 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key.9bfe4cc3
	I0603 13:50:37.881334 1143678 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key
	I0603 13:50:37.881554 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:50:37.881602 1143678 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:50:37.881629 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:50:37.881667 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:50:37.881698 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:50:37.881730 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:50:37.881805 1143678 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:37.882741 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:50:37.919377 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:50:37.957218 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:50:37.987016 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:50:38.024442 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0603 13:50:38.051406 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0603 13:50:38.094816 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:50:38.143689 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/old-k8s-version-151788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:50:38.171488 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:50:38.197296 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:50:38.224025 1143678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:50:38.250728 1143678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:50:38.270485 1143678 ssh_runner.go:195] Run: openssl version
	I0603 13:50:38.276995 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:50:38.288742 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293880 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.293955 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:50:38.300456 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:50:38.312180 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:50:38.324349 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329812 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.329881 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:50:38.337560 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:50:38.350229 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:50:38.362635 1143678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368842 1143678 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.368920 1143678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:50:38.376029 1143678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:50:38.387703 1143678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:50:38.393071 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:50:38.399760 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:50:38.406332 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:50:38.413154 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:50:38.419162 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:50:38.425818 1143678 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
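The openssl calls above follow one pattern per certificate: compute the subject hash, symlink the file under that hash in /etc/ssl/certs, and confirm it will not expire within 24 hours. A condensed sketch with FILE as a placeholder (the b5213941 example matches the minikubeCA.pem link above):

    HASH=$(openssl x509 -hash -noout -in "$FILE")        # e.g. b5213941 for minikubeCA.pem
    sudo ln -fs "$FILE" "/etc/ssl/certs/${HASH}.0"
    openssl x509 -noout -in "$FILE" -checkend 86400      # exit 0 while at least 24h of validity remain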
	I0603 13:50:38.432495 1143678 kubeadm.go:391] StartCluster: {Name:old-k8s-version-151788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-151788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:50:38.432659 1143678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:50:38.432718 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.479889 1143678 cri.go:89] found id: ""
	I0603 13:50:38.479975 1143678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:50:38.490549 1143678 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:50:38.490574 1143678 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:50:38.490580 1143678 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:50:38.490637 1143678 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:50:38.501024 1143678 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:50:38.503665 1143678 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-151788" does not appear in /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:50:38.504563 1143678 kubeconfig.go:62] /home/jenkins/minikube-integration/19011-1078924/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-151788" cluster setting kubeconfig missing "old-k8s-version-151788" context setting]
	I0603 13:50:38.505614 1143678 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:50:38.562691 1143678 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:50:38.573839 1143678 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.65
	I0603 13:50:38.573889 1143678 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:50:38.573905 1143678 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:50:38.573987 1143678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:50:38.615876 1143678 cri.go:89] found id: ""
	I0603 13:50:38.615972 1143678 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:50:38.633568 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:50:38.645197 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:50:38.645229 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:50:38.645291 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:50:38.655344 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:50:38.655423 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:50:38.665789 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:50:38.674765 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:50:38.674842 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:50:38.684268 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.693586 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:50:38.693650 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:50:38.703313 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:50:38.712523 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:50:38.712597 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:50:38.722362 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:50:38.732190 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:38.875545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.722534 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:39.970226 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.090817 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:50:40.193178 1143678 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:50:40.193485 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:40.693580 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.193579 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:41.693608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:42.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:39.318177 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.818337 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.373738 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:43.870381 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:41.597745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:41.598343 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:41.598372 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:41.598280 1144889 retry.go:31] will retry after 2.47307834s: waiting for machine to come up
	I0603 13:50:44.074548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:44.075009 1142862 main.go:141] libmachine: (no-preload-817450) DBG | unable to find current IP address of domain no-preload-817450 in network mk-no-preload-817450
	I0603 13:50:44.075037 1142862 main.go:141] libmachine: (no-preload-817450) DBG | I0603 13:50:44.074970 1144889 retry.go:31] will retry after 3.055733752s: waiting for machine to come up
	I0603 13:50:42.693593 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.194448 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:43.693645 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.694583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.194065 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:45.694138 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.194173 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:46.694344 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:47.194063 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:44.316348 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:46.317245 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:47.133727 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134266 1142862 main.go:141] libmachine: (no-preload-817450) Found IP for machine: 192.168.72.125
	I0603 13:50:47.134301 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has current primary IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.134308 1142862 main.go:141] libmachine: (no-preload-817450) Reserving static IP address...
	I0603 13:50:47.134745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.134777 1142862 main.go:141] libmachine: (no-preload-817450) Reserved static IP address: 192.168.72.125
	I0603 13:50:47.134797 1142862 main.go:141] libmachine: (no-preload-817450) DBG | skip adding static IP to network mk-no-preload-817450 - found existing host DHCP lease matching {name: "no-preload-817450", mac: "52:54:00:8f:cc:be", ip: "192.168.72.125"}
	I0603 13:50:47.134816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Getting to WaitForSSH function...
	I0603 13:50:47.134858 1142862 main.go:141] libmachine: (no-preload-817450) Waiting for SSH to be available...
	I0603 13:50:47.137239 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137669 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.137705 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.137810 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH client type: external
	I0603 13:50:47.137835 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa (-rw-------)
	I0603 13:50:47.137870 1142862 main.go:141] libmachine: (no-preload-817450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0603 13:50:47.137879 1142862 main.go:141] libmachine: (no-preload-817450) DBG | About to run SSH command:
	I0603 13:50:47.137889 1142862 main.go:141] libmachine: (no-preload-817450) DBG | exit 0
	I0603 13:50:47.265932 1142862 main.go:141] libmachine: (no-preload-817450) DBG | SSH cmd err, output: <nil>: 
	I0603 13:50:47.266268 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetConfigRaw
	I0603 13:50:47.267007 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.269463 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.269849 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.269885 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.270135 1142862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/config.json ...
	I0603 13:50:47.270355 1142862 machine.go:94] provisionDockerMachine start ...
	I0603 13:50:47.270375 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:47.270589 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.272915 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273307 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.273341 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.273543 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.273737 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.273905 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.274061 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.274242 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.274417 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.274429 1142862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0603 13:50:47.380760 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0603 13:50:47.380789 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381068 1142862 buildroot.go:166] provisioning hostname "no-preload-817450"
	I0603 13:50:47.381095 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.381314 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.384093 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384460 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.384482 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.384627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.384798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.384938 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.385099 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.385276 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.385533 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.385562 1142862 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-817450 && echo "no-preload-817450" | sudo tee /etc/hostname
	I0603 13:50:47.505203 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-817450
	
	I0603 13:50:47.505231 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.508267 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508696 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.508721 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.508877 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.509066 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509281 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.509437 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.509606 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.509780 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.509795 1142862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-817450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-817450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-817450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0603 13:50:47.618705 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0603 13:50:47.618757 1142862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
	I0603 13:50:47.618822 1142862 buildroot.go:174] setting up certificates
	I0603 13:50:47.618835 1142862 provision.go:84] configureAuth start
	I0603 13:50:47.618854 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetMachineName
	I0603 13:50:47.619166 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:47.621974 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622512 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.622548 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.622652 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.624950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625275 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.625302 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.625419 1142862 provision.go:143] copyHostCerts
	I0603 13:50:47.625504 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem, removing ...
	I0603 13:50:47.625520 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem
	I0603 13:50:47.625591 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
	I0603 13:50:47.625697 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem, removing ...
	I0603 13:50:47.625706 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem
	I0603 13:50:47.625725 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
	I0603 13:50:47.625790 1142862 exec_runner.go:144] found /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem, removing ...
	I0603 13:50:47.625800 1142862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem
	I0603 13:50:47.625826 1142862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
	I0603 13:50:47.625891 1142862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.no-preload-817450 san=[127.0.0.1 192.168.72.125 localhost minikube no-preload-817450]
	I0603 13:50:47.733710 1142862 provision.go:177] copyRemoteCerts
	I0603 13:50:47.733769 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0603 13:50:47.733801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.736326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736657 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.736686 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.736844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.737036 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.737222 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.737341 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:47.821893 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0603 13:50:47.848085 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0603 13:50:47.875891 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0603 13:50:47.900761 1142862 provision.go:87] duration metric: took 281.906702ms to configureAuth
	I0603 13:50:47.900795 1142862 buildroot.go:189] setting minikube options for container-runtime
	I0603 13:50:47.900986 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:50:47.901072 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:47.904128 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904551 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:47.904581 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:47.904802 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:47.905018 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905203 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:47.905413 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:47.905609 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:47.905816 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:47.905839 1142862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0603 13:50:48.176290 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0603 13:50:48.176321 1142862 machine.go:97] duration metric: took 905.950732ms to provisionDockerMachine
	I0603 13:50:48.176333 1142862 start.go:293] postStartSetup for "no-preload-817450" (driver="kvm2")
	I0603 13:50:48.176344 1142862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0603 13:50:48.176361 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.176689 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0603 13:50:48.176712 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.179595 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.179994 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.180020 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.180186 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.180398 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.180561 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.180704 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.267996 1142862 ssh_runner.go:195] Run: cat /etc/os-release
	I0603 13:50:48.272936 1142862 info.go:137] Remote host: Buildroot 2023.02.9
	I0603 13:50:48.272970 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
	I0603 13:50:48.273044 1142862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
	I0603 13:50:48.273141 1142862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem -> 10862512.pem in /etc/ssl/certs
	I0603 13:50:48.273285 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0603 13:50:48.283984 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:50:48.310846 1142862 start.go:296] duration metric: took 134.495139ms for postStartSetup
	I0603 13:50:48.310899 1142862 fix.go:56] duration metric: took 18.512032449s for fixHost
	I0603 13:50:48.310928 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.313969 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314331 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.314358 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.314627 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.314896 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.315258 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.315442 1142862 main.go:141] libmachine: Using SSH client type: native
	I0603 13:50:48.315681 1142862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.125 22 <nil> <nil>}
	I0603 13:50:48.315698 1142862 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0603 13:50:48.422576 1142862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717422648.390814282
	
	I0603 13:50:48.422599 1142862 fix.go:216] guest clock: 1717422648.390814282
	I0603 13:50:48.422606 1142862 fix.go:229] Guest: 2024-06-03 13:50:48.390814282 +0000 UTC Remote: 2024-06-03 13:50:48.310904217 +0000 UTC m=+357.796105522 (delta=79.910065ms)
	I0603 13:50:48.422636 1142862 fix.go:200] guest clock delta is within tolerance: 79.910065ms
	I0603 13:50:48.422642 1142862 start.go:83] releasing machines lock for "no-preload-817450", held for 18.623816039s
	I0603 13:50:48.422659 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.422954 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:48.426261 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426671 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.426701 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.426864 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427460 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427661 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:50:48.427762 1142862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0603 13:50:48.427827 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.427878 1142862 ssh_runner.go:195] Run: cat /version.json
	I0603 13:50:48.427914 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:50:48.430586 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430830 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.430965 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.430993 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431177 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431326 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:48.431355 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:48.431387 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431516 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:50:48.431584 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431676 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:50:48.431751 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.431798 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:50:48.431936 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:50:48.506899 1142862 ssh_runner.go:195] Run: systemctl --version
	I0603 13:50:48.545903 1142862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0603 13:50:48.700235 1142862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0603 13:50:48.706614 1142862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0603 13:50:48.706704 1142862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0603 13:50:48.724565 1142862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0603 13:50:48.724592 1142862 start.go:494] detecting cgroup driver to use...
	I0603 13:50:48.724664 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0603 13:50:48.741006 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0603 13:50:48.758824 1142862 docker.go:217] disabling cri-docker service (if available) ...
	I0603 13:50:48.758899 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0603 13:50:48.773280 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0603 13:50:48.791049 1142862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0603 13:50:48.917847 1142862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0603 13:50:49.081837 1142862 docker.go:233] disabling docker service ...
	I0603 13:50:49.081927 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0603 13:50:49.097577 1142862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0603 13:50:49.112592 1142862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0603 13:50:49.228447 1142862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0603 13:50:49.350782 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0603 13:50:49.366017 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0603 13:50:49.385685 1142862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0603 13:50:49.385765 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.396361 1142862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0603 13:50:49.396432 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.408606 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.419642 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.430431 1142862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0603 13:50:49.441378 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.451810 1142862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.469080 1142862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0603 13:50:49.480054 1142862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0603 13:50:49.489742 1142862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0603 13:50:49.489814 1142862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0603 13:50:49.502889 1142862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0603 13:50:49.512414 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:50:49.639903 1142862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0603 13:50:49.786388 1142862 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0603 13:50:49.786486 1142862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0603 13:50:49.791642 1142862 start.go:562] Will wait 60s for crictl version
	I0603 13:50:49.791711 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:49.796156 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0603 13:50:49.841667 1142862 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0603 13:50:49.841765 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.872213 1142862 ssh_runner.go:195] Run: crio --version
	I0603 13:50:49.910979 1142862 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0603 13:50:46.370749 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:48.870860 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:49.912417 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetIP
	I0603 13:50:49.915368 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915731 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:50:49.915759 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:50:49.915913 1142862 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0603 13:50:49.920247 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0603 13:50:49.933231 1142862 kubeadm.go:877] updating cluster {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0603 13:50:49.933358 1142862 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0603 13:50:49.933388 1142862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0603 13:50:49.970029 1142862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0603 13:50:49.970059 1142862 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0603 13:50:49.970118 1142862 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:49.970147 1142862 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.970163 1142862 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.970198 1142862 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.970239 1142862 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.970316 1142862 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.970328 1142862 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.970379 1142862 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0603 13:50:49.971837 1142862 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:49.971841 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:49.971809 1142862 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:49.971808 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:49.971876 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:49.971816 1142862 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:49.971813 1142862 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.126557 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0603 13:50:50.146394 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.149455 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.149755 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.154990 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.162983 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.177520 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.188703 1142862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.299288 1142862 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0603 13:50:50.299312 1142862 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0603 13:50:50.299345 1142862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.299350 1142862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.299389 1142862 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0603 13:50:50.299406 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299413 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.299422 1142862 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.299488 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353368 1142862 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0603 13:50:50.353431 1142862 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.353485 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.353506 1142862 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0603 13:50:50.353543 1142862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.353591 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379011 1142862 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0603 13:50:50.379028 1142862 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0603 13:50:50.379054 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0603 13:50:50.379062 1142862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.379105 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379075 1142862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.379146 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0603 13:50:50.379181 1142862 ssh_runner.go:195] Run: which crictl
	I0603 13:50:50.379212 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0603 13:50:50.379229 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0603 13:50:50.379239 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:50:50.482204 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0603 13:50:50.482210 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0603 13:50:50.482332 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.511560 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0603 13:50:50.511671 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0603 13:50:50.511721 1142862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0603 13:50:50.511769 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:50.511772 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0603 13:50:50.511682 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:50.511868 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:50.512290 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0603 13:50:50.512360 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:50.549035 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0603 13:50:50.549061 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0603 13:50:50.549066 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549156 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0603 13:50:50.549166 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:50:47.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.193894 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.694053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.193587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:49.694081 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.194053 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:50.694265 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:51.694283 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:52.194444 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:48.321194 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.816679 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:52.818121 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:51.372716 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:53.372880 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:50.573615 1142862 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0603 13:50:50.573661 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0603 13:50:50.573708 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0603 13:50:50.573737 1142862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:50.573754 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0603 13:50:50.573816 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0603 13:50:50.573839 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0603 13:50:52.739312 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (2.190102069s)
	I0603 13:50:52.739333 1142862 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.165569436s)
	I0603 13:50:52.739354 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0603 13:50:52.739365 1142862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0603 13:50:52.739372 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:52.739420 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0603 13:50:54.995960 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.256502953s)
	I0603 13:50:54.996000 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0603 13:50:54.996019 1142862 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:54.996076 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0603 13:50:52.694071 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.193597 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:53.694503 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.193609 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:54.694446 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.193856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.693583 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.194271 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:56.693558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:57.194427 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:55.317668 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:57.318423 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.872030 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:58.376034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:50:55.844775 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0603 13:50:55.844853 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:55.844967 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0603 13:50:58.110074 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.265068331s)
	I0603 13:50:58.110103 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0603 13:50:58.110115 1142862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:58.110169 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0603 13:50:59.979789 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.869594477s)
	I0603 13:50:59.979817 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0603 13:50:59.979832 1142862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:59.979875 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0603 13:50:57.694027 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.193718 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:58.693488 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.193725 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.694310 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.194455 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:00.694182 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.193916 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:01.693504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:02.194236 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:50:59.816444 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:01.817757 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:00.872105 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:03.373427 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:04.067476 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.087571936s)
	I0603 13:51:04.067529 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0603 13:51:04.067549 1142862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:04.067605 1142862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0603 13:51:02.694248 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.194094 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:03.694072 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.194494 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.693899 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.193578 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:05.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.193934 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:06.693586 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:07.193993 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:04.316979 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:06.318105 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.871061 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:08.371377 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:05.819264 1142862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.75162069s)
	I0603 13:51:05.819302 1142862 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0603 13:51:05.819334 1142862 cache_images.go:123] Successfully loaded all cached images
	I0603 13:51:05.819341 1142862 cache_images.go:92] duration metric: took 15.849267186s to LoadCachedImages
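	The image-load phase above shells out to "sudo podman load -i <tarball>" for each cached image under /var/lib/minikube/images and times every load. The following is a minimal Go sketch of that single step, for illustration only: the passwordless-sudo assumption and the example tarball path are not taken from minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// loadImage mirrors the "sudo podman load -i" calls seen in the log:
// it loads one cached image tarball into the podman/CRI-O image store
// and reports how long the load took.
func loadImage(tarball string) error {
	start := time.Now()
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("loaded %s in %s\n", tarball, time.Since(start))
	return nil
}

func main() {
	// Hypothetical path following the layout used in the log.
	if err := loadImage("/var/lib/minikube/images/kube-proxy_v1.30.1"); err != nil {
		fmt.Println(err)
	}
}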
	I0603 13:51:05.819352 1142862 kubeadm.go:928] updating node { 192.168.72.125 8443 v1.30.1 crio true true} ...
	I0603 13:51:05.819549 1142862 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-817450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0603 13:51:05.819636 1142862 ssh_runner.go:195] Run: crio config
	I0603 13:51:05.874089 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:05.874114 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:05.874127 1142862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0603 13:51:05.874152 1142862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.125 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-817450 NodeName:no-preload-817450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0603 13:51:05.874339 1142862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-817450"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0603 13:51:05.874411 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0603 13:51:05.886116 1142862 binaries.go:44] Found k8s binaries, skipping transfer
	I0603 13:51:05.886185 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0603 13:51:05.896269 1142862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0603 13:51:05.914746 1142862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0603 13:51:05.931936 1142862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0603 13:51:05.949151 1142862 ssh_runner.go:195] Run: grep 192.168.72.125	control-plane.minikube.internal$ /etc/hosts
	I0603 13:51:05.953180 1142862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
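	The one-liner above strips any stale "control-plane.minikube.internal" line from /etc/hosts and appends the fresh IP mapping. A rough Go equivalent of that idea, assuming write access to /etc/hosts; the function name and rewrite strategy are this sketch's own, not minikube's.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites the hosts file so exactly one line maps the
// given hostname to the given IP, dropping any older entry for that host.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), host) {
			continue // drop any stale entry for the host
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.125", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}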
	I0603 13:51:05.966675 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:51:06.107517 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:51:06.129233 1142862 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450 for IP: 192.168.72.125
	I0603 13:51:06.129264 1142862 certs.go:194] generating shared ca certs ...
	I0603 13:51:06.129280 1142862 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:51:06.129517 1142862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
	I0603 13:51:06.129583 1142862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
	I0603 13:51:06.129597 1142862 certs.go:256] generating profile certs ...
	I0603 13:51:06.129686 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/client.key
	I0603 13:51:06.129746 1142862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key.e8ec030b
	I0603 13:51:06.129779 1142862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key
	I0603 13:51:06.129885 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem (1338 bytes)
	W0603 13:51:06.129912 1142862 certs.go:480] ignoring /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251_empty.pem, impossibly tiny 0 bytes
	I0603 13:51:06.129919 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
	I0603 13:51:06.129939 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
	I0603 13:51:06.129965 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
	I0603 13:51:06.129991 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
	I0603 13:51:06.130028 1142862 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem (1708 bytes)
	I0603 13:51:06.130817 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0603 13:51:06.171348 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0603 13:51:06.206270 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0603 13:51:06.240508 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0603 13:51:06.292262 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0603 13:51:06.320406 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0603 13:51:06.346655 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0603 13:51:06.375908 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/no-preload-817450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0603 13:51:06.401723 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0603 13:51:06.425992 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/1086251.pem --> /usr/share/ca-certificates/1086251.pem (1338 bytes)
	I0603 13:51:06.450484 1142862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/ssl/certs/10862512.pem --> /usr/share/ca-certificates/10862512.pem (1708 bytes)
	I0603 13:51:06.475206 1142862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0603 13:51:06.492795 1142862 ssh_runner.go:195] Run: openssl version
	I0603 13:51:06.499759 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0603 13:51:06.511760 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516690 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  3 12:24 /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.516763 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0603 13:51:06.523284 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0603 13:51:06.535250 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1086251.pem && ln -fs /usr/share/ca-certificates/1086251.pem /etc/ssl/certs/1086251.pem"
	I0603 13:51:06.545921 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550765 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  3 12:37 /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.550823 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1086251.pem
	I0603 13:51:06.556898 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1086251.pem /etc/ssl/certs/51391683.0"
	I0603 13:51:06.567717 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10862512.pem && ln -fs /usr/share/ca-certificates/10862512.pem /etc/ssl/certs/10862512.pem"
	I0603 13:51:06.578662 1142862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584084 1142862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  3 12:37 /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.584153 1142862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10862512.pem
	I0603 13:51:06.591566 1142862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10862512.pem /etc/ssl/certs/3ec20f2e.0"
	I0603 13:51:06.603554 1142862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0603 13:51:06.608323 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0603 13:51:06.614939 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0603 13:51:06.621519 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0603 13:51:06.627525 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0603 13:51:06.633291 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0603 13:51:06.639258 1142862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
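	The six "openssl x509 -checkend 86400" runs above confirm that each control-plane certificate remains valid for at least another 24 hours. A standard-library Go sketch of the same check is shown below; the file path in main is a placeholder and the helper is illustrative, not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within the given duration, mirroring "openssl x509 -checkend".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}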
	I0603 13:51:06.644789 1142862 kubeadm.go:391] StartCluster: {Name:no-preload-817450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-817450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 13:51:06.644876 1142862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0603 13:51:06.644928 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.694731 1142862 cri.go:89] found id: ""
	I0603 13:51:06.694811 1142862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0603 13:51:06.709773 1142862 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0603 13:51:06.709804 1142862 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0603 13:51:06.709812 1142862 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0603 13:51:06.709875 1142862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0603 13:51:06.721095 1142862 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:51:06.722256 1142862 kubeconfig.go:125] found "no-preload-817450" server: "https://192.168.72.125:8443"
	I0603 13:51:06.724877 1142862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0603 13:51:06.735753 1142862 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.125
	I0603 13:51:06.735789 1142862 kubeadm.go:1154] stopping kube-system containers ...
	I0603 13:51:06.735802 1142862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0603 13:51:06.735847 1142862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0603 13:51:06.776650 1142862 cri.go:89] found id: ""
	I0603 13:51:06.776743 1142862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0603 13:51:06.796259 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:51:06.809765 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:51:06.809785 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:51:06.809839 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:51:06.819821 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:51:06.819878 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:51:06.829960 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:51:06.839510 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:51:06.839561 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:51:06.849346 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.858834 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:51:06.858886 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:51:06.869159 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:51:06.879672 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:51:06.879739 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:51:06.889393 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:51:06.899309 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:07.021375 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.119929 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.098510185s)
	I0603 13:51:08.119959 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.318752 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.396713 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:08.506285 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:51:08.506384 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.006865 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.506528 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.582432 1142862 api_server.go:72] duration metric: took 1.076134659s to wait for apiserver process to appear ...
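	Both runners in this log wait for the API server process by re-running "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly twice a second until it matches. A simplified Go sketch of that wait loop follows; the timeout value and function name are assumptions made for the example.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
// command line matches the minikube pattern appears, or the timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("kube-apiserver pid: %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence seen in the log
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}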
	I0603 13:51:09.582463 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:51:09.582507 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:07.693540 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.194490 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.694498 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.194496 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:09.694286 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.193605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:10.694326 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.193904 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:11.694504 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:12.194093 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:08.318739 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.817309 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:10.371622 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.372640 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:14.871007 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:12.049693 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.049731 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.049748 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.084495 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0603 13:51:12.084526 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0603 13:51:12.084541 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.141515 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.141555 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:12.582630 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:12.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:12.590279 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.082813 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.097350 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.097380 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:13.582895 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:13.587479 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:13.587511 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.083076 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.087531 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.087561 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:14.583203 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:14.587735 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:14.587781 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.082844 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.087403 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0603 13:51:15.087438 1142862 api_server.go:103] status: https://192.168.72.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0603 13:51:15.583226 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:51:15.590238 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:51:15.601732 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:51:15.601762 1142862 api_server.go:131] duration metric: took 6.019291333s to wait for apiserver health ...
	I0603 13:51:15.601775 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:51:15.601784 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:51:15.603654 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
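	The lines above show the restart path polling https://192.168.72.125:8443/healthz roughly every 500ms, treating a 500 with failing post-start hooks as "not yet", and stopping once the endpoint returns 200. A minimal sketch of that polling pattern in Go follows; the URL, the 500ms interval, the overall timeout, and the skipped TLS verification are illustrative assumptions for the sketch, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// assumption for the sketch: the apiserver cert is not trusted here, so skip verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: the control plane is serving
				}
				// a 500 with "[-]poststarthook/... failed" means some hooks are still settling; keep polling
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.125:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}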
	I0603 13:51:12.694356 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.194219 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.693546 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:14.694003 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.193572 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:15.694012 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.193567 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:16.694014 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:17.193554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:13.320666 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.818073 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.369593 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:19.369916 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:15.605291 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:51:15.618333 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:51:15.640539 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:51:15.651042 1142862 system_pods.go:59] 8 kube-system pods found
	I0603 13:51:15.651086 1142862 system_pods.go:61] "coredns-7db6d8ff4d-s562v" [be995d41-2b25-4839-a36b-212a507e7db7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0603 13:51:15.651102 1142862 system_pods.go:61] "etcd-no-preload-817450" [1b21708b-d81b-4594-a186-546437467c26] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0603 13:51:15.651117 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [0741a4bf-3161-4cf3-a9c6-36af2a0c4fde] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0603 13:51:15.651126 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [43713383-9197-4874-8aa9-7b1b1f05e4b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0603 13:51:15.651133 1142862 system_pods.go:61] "kube-proxy-2j4sg" [112657ad-311a-46ee-b5c0-6f544991465e] Running
	I0603 13:51:15.651145 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [40db5c40-dc01-4fd3-a5e0-06a6ee1fd0a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0603 13:51:15.651152 1142862 system_pods.go:61] "metrics-server-569cc877fc-mtvrq" [00cb7657-2564-4d25-8faa-b6f618e61115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:51:15.651163 1142862 system_pods.go:61] "storage-provisioner" [913d3120-32ce-4212-84be-9e3b99f2a894] Running
	I0603 13:51:15.651171 1142862 system_pods.go:74] duration metric: took 10.608401ms to wait for pod list to return data ...
	I0603 13:51:15.651181 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:51:15.654759 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:51:15.654784 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:51:15.654795 1142862 node_conditions.go:105] duration metric: took 3.608137ms to run NodePressure ...
	I0603 13:51:15.654813 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0603 13:51:15.940085 1142862 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944785 1142862 kubeadm.go:733] kubelet initialised
	I0603 13:51:15.944808 1142862 kubeadm.go:734] duration metric: took 4.692827ms waiting for restarted kubelet to initialise ...
	I0603 13:51:15.944817 1142862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:51:15.950113 1142862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:17.958330 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.456029 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:17.693856 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.193853 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.693858 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.193568 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:19.693680 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.193556 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:20.694129 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.193662 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:21.694445 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:22.193668 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:18.317128 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:20.317375 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.317530 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.371070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:23.871400 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:21.958183 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:21.958208 1142862 pod_ready.go:81] duration metric: took 6.008058251s for pod "coredns-7db6d8ff4d-s562v" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:21.958220 1142862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:23.964785 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:22.694004 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.193793 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:23.694340 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.194411 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.694314 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.194501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:25.693545 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.194255 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:26.694312 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:27.194453 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:24.817165 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.317176 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:26.369665 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:28.370392 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:25.966060 1142862 pod_ready.go:102] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:27.965236 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.965267 1142862 pod_ready.go:81] duration metric: took 6.007038184s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.965281 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969898 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.969920 1142862 pod_ready.go:81] duration metric: took 4.630357ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.969932 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974500 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.974517 1142862 pod_ready.go:81] duration metric: took 4.577117ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.974526 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978510 1142862 pod_ready.go:92] pod "kube-proxy-2j4sg" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.978530 1142862 pod_ready.go:81] duration metric: took 3.997645ms for pod "kube-proxy-2j4sg" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.978537 1142862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982488 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:51:27.982507 1142862 pod_ready.go:81] duration metric: took 3.962666ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:27.982518 1142862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	I0603 13:51:29.989265 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
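	The pod_ready entries above wait up to 4m0s per pod for the Ready condition to become True (coredns, etcd, kube-apiserver, and so on each complete in turn, while metrics-server keeps reporting False). A minimal sketch of such a readiness wait using client-go is below; the kubeconfig path, polling interval, and the choice of pod name are assumptions for illustration, not the test's own code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// assumed kubeconfig location for the sketch
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the "waiting up to 4m0s" in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-817450", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}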
	I0603 13:51:27.694334 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.193809 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:28.693744 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.193608 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.693584 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.194111 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:30.694213 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.193588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:31.694336 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:32.193716 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:29.317483 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.324199 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:30.370435 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.870510 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.872543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:31.990649 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:34.488899 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:32.693501 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.194174 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.693995 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.194242 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:34.693961 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.194052 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:35.693730 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.193559 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:36.693763 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:37.194274 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:33.816533 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.316832 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.371543 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:39.372034 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:36.489364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:38.490431 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:40.490888 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:37.693590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.194328 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:38.694296 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.194272 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:39.693607 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:40.193595 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:40.193691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:40.237747 1143678 cri.go:89] found id: ""
	I0603 13:51:40.237776 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.237785 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:40.237792 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:40.237854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:40.275924 1143678 cri.go:89] found id: ""
	I0603 13:51:40.275964 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.275975 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:40.275983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:40.276049 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:40.314827 1143678 cri.go:89] found id: ""
	I0603 13:51:40.314857 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.314870 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:40.314877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:40.314939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:40.359040 1143678 cri.go:89] found id: ""
	I0603 13:51:40.359072 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.359084 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:40.359092 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:40.359154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:40.396136 1143678 cri.go:89] found id: ""
	I0603 13:51:40.396170 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.396185 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:40.396194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:40.396261 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:40.436766 1143678 cri.go:89] found id: ""
	I0603 13:51:40.436803 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.436814 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:40.436828 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:40.436902 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:40.477580 1143678 cri.go:89] found id: ""
	I0603 13:51:40.477606 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.477615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:40.477621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:40.477713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:40.518920 1143678 cri.go:89] found id: ""
	I0603 13:51:40.518960 1143678 logs.go:276] 0 containers: []
	W0603 13:51:40.518972 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:40.518984 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:40.519001 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:40.659881 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:40.659913 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:40.659932 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:40.727850 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:40.727894 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:40.774153 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:40.774189 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:40.828054 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:40.828094 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
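	When no kube-apiserver container can be found, the cycle above falls back to gathering diagnostics: the pgrep probe, crictl listings per component, the kubelet and CRI-O journals, dmesg, and a kubectl describe nodes that fails because localhost:8443 refuses connections. A minimal local sketch of that gathering loop is below, reusing the commands visible in the log; in the test they run over SSH inside the VM, whereas this sketch assumes they are available on the current host.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// commands copied from the log above; running them locally is an assumption of this sketch
		cmds := []string{
			"sudo pgrep -xnf kube-apiserver.*minikube.*",
			"sudo crictl ps -a --quiet --name=kube-apiserver",
			"sudo journalctl -u kubelet -n 400",
			"sudo journalctl -u crio -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		}
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			fmt.Printf("$ %s\n(err=%v)\n%s\n", c, err, out)
		}
	}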
	I0603 13:51:38.820985 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.322044 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:41.870717 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.872112 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:42.988898 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:44.989384 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:43.342659 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:43.357063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:43.357131 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:43.398000 1143678 cri.go:89] found id: ""
	I0603 13:51:43.398036 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.398045 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:43.398051 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:43.398106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:43.436761 1143678 cri.go:89] found id: ""
	I0603 13:51:43.436805 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.436814 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:43.436820 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:43.436872 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:43.478122 1143678 cri.go:89] found id: ""
	I0603 13:51:43.478154 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.478164 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:43.478172 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:43.478243 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:43.514473 1143678 cri.go:89] found id: ""
	I0603 13:51:43.514511 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.514523 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:43.514532 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:43.514600 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:43.552354 1143678 cri.go:89] found id: ""
	I0603 13:51:43.552390 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.552399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:43.552405 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:43.552489 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:43.590637 1143678 cri.go:89] found id: ""
	I0603 13:51:43.590665 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.590677 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:43.590685 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:43.590745 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:43.633958 1143678 cri.go:89] found id: ""
	I0603 13:51:43.634001 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.634013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:43.634021 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:43.634088 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:43.672640 1143678 cri.go:89] found id: ""
	I0603 13:51:43.672683 1143678 logs.go:276] 0 containers: []
	W0603 13:51:43.672695 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:43.672716 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:43.672733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:43.725880 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:43.725937 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:43.743736 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:43.743771 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:43.831757 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:43.831785 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:43.831801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:43.905062 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:43.905114 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:46.459588 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:46.472911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:46.472983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:46.513723 1143678 cri.go:89] found id: ""
	I0603 13:51:46.513757 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.513768 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:46.513776 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:46.513845 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:46.549205 1143678 cri.go:89] found id: ""
	I0603 13:51:46.549234 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.549242 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:46.549251 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:46.549311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:46.585004 1143678 cri.go:89] found id: ""
	I0603 13:51:46.585042 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.585053 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:46.585063 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:46.585120 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:46.620534 1143678 cri.go:89] found id: ""
	I0603 13:51:46.620571 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.620582 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:46.620590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:46.620661 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:46.655974 1143678 cri.go:89] found id: ""
	I0603 13:51:46.656005 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.656014 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:46.656020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:46.656091 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:46.693078 1143678 cri.go:89] found id: ""
	I0603 13:51:46.693141 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.693158 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:46.693168 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:46.693244 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:46.729177 1143678 cri.go:89] found id: ""
	I0603 13:51:46.729213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.729223 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:46.729232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:46.729300 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:46.766899 1143678 cri.go:89] found id: ""
	I0603 13:51:46.766929 1143678 logs.go:276] 0 containers: []
	W0603 13:51:46.766937 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:46.766946 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:46.766959 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:46.826715 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:46.826757 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:46.841461 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:46.841504 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:46.914505 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:46.914533 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:46.914551 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:46.989886 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:46.989928 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:43.817456 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:45.817576 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.370927 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:48.371196 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:46.990440 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.489483 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:49.532804 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:49.547359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:49.547438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:49.584262 1143678 cri.go:89] found id: ""
	I0603 13:51:49.584299 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.584311 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:49.584319 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:49.584389 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:49.622332 1143678 cri.go:89] found id: ""
	I0603 13:51:49.622372 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.622384 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:49.622393 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:49.622488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:49.664339 1143678 cri.go:89] found id: ""
	I0603 13:51:49.664378 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.664390 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:49.664399 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:49.664468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:49.712528 1143678 cri.go:89] found id: ""
	I0603 13:51:49.712558 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.712565 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:49.712574 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:49.712640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:49.767343 1143678 cri.go:89] found id: ""
	I0603 13:51:49.767374 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.767382 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:49.767388 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:49.767450 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:49.822457 1143678 cri.go:89] found id: ""
	I0603 13:51:49.822491 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.822499 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:49.822505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:49.822561 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:49.867823 1143678 cri.go:89] found id: ""
	I0603 13:51:49.867855 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.867867 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:49.867875 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:49.867936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:49.906765 1143678 cri.go:89] found id: ""
	I0603 13:51:49.906797 1143678 logs.go:276] 0 containers: []
	W0603 13:51:49.906805 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:49.906816 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:49.906829 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:49.921731 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:49.921764 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:49.993832 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:49.993860 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:49.993878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:50.070080 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:50.070125 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:50.112323 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:50.112357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:48.317830 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.816577 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.817035 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:50.871664 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.871865 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:51.990258 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:54.489037 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:52.666289 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:52.680475 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:52.680550 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:52.722025 1143678 cri.go:89] found id: ""
	I0603 13:51:52.722063 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.722075 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:52.722083 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:52.722145 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:52.759709 1143678 cri.go:89] found id: ""
	I0603 13:51:52.759742 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.759754 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:52.759762 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:52.759838 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:52.797131 1143678 cri.go:89] found id: ""
	I0603 13:51:52.797162 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.797171 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:52.797176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:52.797231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:52.832921 1143678 cri.go:89] found id: ""
	I0603 13:51:52.832951 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.832959 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:52.832965 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:52.833024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:52.869361 1143678 cri.go:89] found id: ""
	I0603 13:51:52.869389 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.869399 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:52.869422 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:52.869495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:52.905863 1143678 cri.go:89] found id: ""
	I0603 13:51:52.905897 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.905909 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:52.905917 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:52.905985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:52.940407 1143678 cri.go:89] found id: ""
	I0603 13:51:52.940438 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.940446 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:52.940452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:52.940517 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:52.982079 1143678 cri.go:89] found id: ""
	I0603 13:51:52.982115 1143678 logs.go:276] 0 containers: []
	W0603 13:51:52.982126 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:52.982138 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:52.982155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:53.066897 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:53.066942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:53.108016 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:53.108056 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:53.164105 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:53.164151 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:53.178708 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:53.178743 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:53.257441 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.758633 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:55.774241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:55.774329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:55.809373 1143678 cri.go:89] found id: ""
	I0603 13:51:55.809436 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.809450 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:55.809467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:55.809539 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:55.849741 1143678 cri.go:89] found id: ""
	I0603 13:51:55.849768 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.849776 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:55.849783 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:55.849834 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:55.893184 1143678 cri.go:89] found id: ""
	I0603 13:51:55.893216 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.893228 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:55.893238 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:55.893307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:55.931572 1143678 cri.go:89] found id: ""
	I0603 13:51:55.931618 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.931632 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:55.931642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:55.931713 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:55.969490 1143678 cri.go:89] found id: ""
	I0603 13:51:55.969527 1143678 logs.go:276] 0 containers: []
	W0603 13:51:55.969538 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:55.969546 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:55.969614 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:56.009266 1143678 cri.go:89] found id: ""
	I0603 13:51:56.009301 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.009313 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:56.009321 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:56.009394 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:56.049471 1143678 cri.go:89] found id: ""
	I0603 13:51:56.049520 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.049540 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:56.049547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:56.049616 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:56.090176 1143678 cri.go:89] found id: ""
	I0603 13:51:56.090213 1143678 logs.go:276] 0 containers: []
	W0603 13:51:56.090228 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:56.090241 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:56.090266 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:56.175692 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:56.175737 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:56.222642 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:56.222683 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:56.276258 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:56.276301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:56.291703 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:56.291739 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:56.364788 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:51:55.316604 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.816804 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:55.370917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:57.372903 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:59.870783 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:56.489636 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.990006 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:51:58.865558 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:51:58.879983 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:51:58.880074 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:51:58.917422 1143678 cri.go:89] found id: ""
	I0603 13:51:58.917461 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.917473 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:51:58.917480 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:51:58.917535 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:51:58.953900 1143678 cri.go:89] found id: ""
	I0603 13:51:58.953933 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.953943 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:51:58.953959 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:51:58.954030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:51:58.988677 1143678 cri.go:89] found id: ""
	I0603 13:51:58.988704 1143678 logs.go:276] 0 containers: []
	W0603 13:51:58.988713 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:51:58.988721 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:51:58.988783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:51:59.023436 1143678 cri.go:89] found id: ""
	I0603 13:51:59.023474 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.023486 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:51:59.023494 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:51:59.023570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:51:59.061357 1143678 cri.go:89] found id: ""
	I0603 13:51:59.061386 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.061394 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:51:59.061400 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:51:59.061487 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:51:59.102995 1143678 cri.go:89] found id: ""
	I0603 13:51:59.103025 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.103038 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:51:59.103047 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:51:59.103124 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:51:59.141443 1143678 cri.go:89] found id: ""
	I0603 13:51:59.141480 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.141492 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:51:59.141499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:51:59.141586 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:51:59.182909 1143678 cri.go:89] found id: ""
	I0603 13:51:59.182943 1143678 logs.go:276] 0 containers: []
	W0603 13:51:59.182953 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:51:59.182967 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:51:59.182984 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:51:59.259533 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:51:59.259580 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:51:59.308976 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:51:59.309016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.362092 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:51:59.362142 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:51:59.378836 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:51:59.378887 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:51:59.454524 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:01.954939 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:01.969968 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:01.970039 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:02.014226 1143678 cri.go:89] found id: ""
	I0603 13:52:02.014267 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.014280 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:02.014289 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:02.014361 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:02.051189 1143678 cri.go:89] found id: ""
	I0603 13:52:02.051244 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.051259 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:02.051268 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:02.051349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:02.093509 1143678 cri.go:89] found id: ""
	I0603 13:52:02.093548 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.093575 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:02.093586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:02.093718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:02.132069 1143678 cri.go:89] found id: ""
	I0603 13:52:02.132113 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.132129 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:02.132138 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:02.132299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:02.168043 1143678 cri.go:89] found id: ""
	I0603 13:52:02.168071 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.168079 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:02.168085 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:02.168138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:02.207029 1143678 cri.go:89] found id: ""
	I0603 13:52:02.207064 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.207074 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:02.207081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:02.207134 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:02.247669 1143678 cri.go:89] found id: ""
	I0603 13:52:02.247719 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.247728 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:02.247734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:02.247848 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:02.285780 1143678 cri.go:89] found id: ""
	I0603 13:52:02.285817 1143678 logs.go:276] 0 containers: []
	W0603 13:52:02.285829 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:02.285841 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:02.285863 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:51:59.817887 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.818381 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:01.871338 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:04.371052 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:00.990263 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.990651 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:05.490343 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:02.348775 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:02.349776 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:02.364654 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:02.364691 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:02.447948 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:02.447978 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:02.447992 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:02.534039 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:02.534100 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:05.080437 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:05.094169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:05.094245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:05.132312 1143678 cri.go:89] found id: ""
	I0603 13:52:05.132339 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.132346 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:05.132352 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:05.132423 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:05.168941 1143678 cri.go:89] found id: ""
	I0603 13:52:05.168979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.168990 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:05.168999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:05.169068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:05.207151 1143678 cri.go:89] found id: ""
	I0603 13:52:05.207188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.207196 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:05.207202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:05.207272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:05.258807 1143678 cri.go:89] found id: ""
	I0603 13:52:05.258839 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.258850 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:05.258859 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:05.259004 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:05.298250 1143678 cri.go:89] found id: ""
	I0603 13:52:05.298285 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.298297 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:05.298306 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:05.298381 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:05.340922 1143678 cri.go:89] found id: ""
	I0603 13:52:05.340951 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.340959 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:05.340966 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:05.341027 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:05.382680 1143678 cri.go:89] found id: ""
	I0603 13:52:05.382707 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.382715 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:05.382722 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:05.382777 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:05.426774 1143678 cri.go:89] found id: ""
	I0603 13:52:05.426801 1143678 logs.go:276] 0 containers: []
	W0603 13:52:05.426811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:05.426822 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:05.426836 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:05.483042 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:05.483091 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:05.499119 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:05.499159 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:05.580933 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:05.580962 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:05.580983 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:05.660395 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:05.660437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:03.818676 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.316881 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:06.371515 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.871174 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:07.490662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:09.992709 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:08.200887 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:08.215113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:08.215203 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:08.252367 1143678 cri.go:89] found id: ""
	I0603 13:52:08.252404 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.252417 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:08.252427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:08.252500 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:08.289249 1143678 cri.go:89] found id: ""
	I0603 13:52:08.289279 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.289290 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:08.289298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:08.289364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:08.331155 1143678 cri.go:89] found id: ""
	I0603 13:52:08.331181 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.331195 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:08.331201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:08.331258 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:08.371376 1143678 cri.go:89] found id: ""
	I0603 13:52:08.371400 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.371408 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:08.371415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:08.371477 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:08.408009 1143678 cri.go:89] found id: ""
	I0603 13:52:08.408045 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.408057 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:08.408065 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:08.408119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:08.446377 1143678 cri.go:89] found id: ""
	I0603 13:52:08.446413 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.446421 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:08.446429 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:08.446504 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:08.485429 1143678 cri.go:89] found id: ""
	I0603 13:52:08.485461 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.485471 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:08.485479 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:08.485546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:08.527319 1143678 cri.go:89] found id: ""
	I0603 13:52:08.527363 1143678 logs.go:276] 0 containers: []
	W0603 13:52:08.527375 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:08.527388 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:08.527414 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:08.602347 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:08.602371 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:08.602384 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:08.683855 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:08.683902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.724402 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:08.724443 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:08.781154 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:08.781202 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.297827 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:11.313927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:11.314006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:11.352622 1143678 cri.go:89] found id: ""
	I0603 13:52:11.352660 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.352671 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:11.352678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:11.352755 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:11.395301 1143678 cri.go:89] found id: ""
	I0603 13:52:11.395338 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.395351 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:11.395360 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:11.395442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:11.431104 1143678 cri.go:89] found id: ""
	I0603 13:52:11.431143 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.431155 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:11.431170 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:11.431234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:11.470177 1143678 cri.go:89] found id: ""
	I0603 13:52:11.470212 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.470223 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:11.470241 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:11.470309 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:11.508741 1143678 cri.go:89] found id: ""
	I0603 13:52:11.508779 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.508803 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:11.508810 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:11.508906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:11.544970 1143678 cri.go:89] found id: ""
	I0603 13:52:11.545002 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.545012 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:11.545022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:11.545093 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:11.583606 1143678 cri.go:89] found id: ""
	I0603 13:52:11.583636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.583653 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:11.583666 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:11.583739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:11.624770 1143678 cri.go:89] found id: ""
	I0603 13:52:11.624806 1143678 logs.go:276] 0 containers: []
	W0603 13:52:11.624815 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:11.624824 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:11.624841 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:11.680251 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:11.680298 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:11.695656 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:11.695695 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:11.770414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:11.770478 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:11.770497 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:11.850812 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:11.850871 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:08.318447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:10.817734 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:11.372533 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:13.871822 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:12.490666 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.988752 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:14.398649 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:14.411591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:14.411689 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:14.447126 1143678 cri.go:89] found id: ""
	I0603 13:52:14.447158 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.447170 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:14.447178 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:14.447245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:14.486681 1143678 cri.go:89] found id: ""
	I0603 13:52:14.486716 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.486728 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:14.486735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:14.486799 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:14.521297 1143678 cri.go:89] found id: ""
	I0603 13:52:14.521326 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.521337 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:14.521343 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:14.521443 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:14.565086 1143678 cri.go:89] found id: ""
	I0603 13:52:14.565121 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.565130 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:14.565136 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:14.565196 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:14.601947 1143678 cri.go:89] found id: ""
	I0603 13:52:14.601975 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.601984 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:14.601990 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:14.602044 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:14.638332 1143678 cri.go:89] found id: ""
	I0603 13:52:14.638359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.638366 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:14.638374 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:14.638435 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:14.675254 1143678 cri.go:89] found id: ""
	I0603 13:52:14.675284 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.675293 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:14.675299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:14.675354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:14.712601 1143678 cri.go:89] found id: ""
	I0603 13:52:14.712631 1143678 logs.go:276] 0 containers: []
	W0603 13:52:14.712639 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:14.712649 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:14.712663 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:14.787026 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:14.787068 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:14.836534 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:14.836564 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:14.889682 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:14.889729 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:14.905230 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:14.905264 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:14.979090 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:13.317070 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.317490 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.816412 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:15.871901 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.370626 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:16.989195 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:18.990108 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:17.479590 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:17.495088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:17.495250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:17.530832 1143678 cri.go:89] found id: ""
	I0603 13:52:17.530871 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.530883 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:17.530891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:17.530966 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:17.567183 1143678 cri.go:89] found id: ""
	I0603 13:52:17.567213 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.567224 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:17.567232 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:17.567305 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:17.602424 1143678 cri.go:89] found id: ""
	I0603 13:52:17.602458 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.602469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:17.602493 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:17.602570 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:17.641148 1143678 cri.go:89] found id: ""
	I0603 13:52:17.641184 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.641197 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:17.641205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:17.641273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:17.679004 1143678 cri.go:89] found id: ""
	I0603 13:52:17.679031 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.679039 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:17.679045 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:17.679102 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:17.717667 1143678 cri.go:89] found id: ""
	I0603 13:52:17.717698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.717707 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:17.717715 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:17.717786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:17.760262 1143678 cri.go:89] found id: ""
	I0603 13:52:17.760300 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.760323 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:17.760331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:17.760416 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:17.796910 1143678 cri.go:89] found id: ""
	I0603 13:52:17.796943 1143678 logs.go:276] 0 containers: []
	W0603 13:52:17.796960 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:17.796976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:17.796990 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:17.811733 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:17.811768 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:17.891891 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:17.891920 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:17.891939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:17.969495 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:17.969535 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:18.032622 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:18.032654 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.586079 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:20.599118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:20.599202 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:20.633732 1143678 cri.go:89] found id: ""
	I0603 13:52:20.633770 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.633780 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:20.633787 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:20.633841 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:20.668126 1143678 cri.go:89] found id: ""
	I0603 13:52:20.668155 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.668163 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:20.668169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:20.668231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:20.704144 1143678 cri.go:89] found id: ""
	I0603 13:52:20.704177 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.704187 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:20.704194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:20.704251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:20.745562 1143678 cri.go:89] found id: ""
	I0603 13:52:20.745594 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.745602 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:20.745608 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:20.745663 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:20.788998 1143678 cri.go:89] found id: ""
	I0603 13:52:20.789041 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.789053 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:20.789075 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:20.789152 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:20.832466 1143678 cri.go:89] found id: ""
	I0603 13:52:20.832495 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.832503 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:20.832510 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:20.832575 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:20.875212 1143678 cri.go:89] found id: ""
	I0603 13:52:20.875248 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.875258 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:20.875267 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:20.875336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:20.912957 1143678 cri.go:89] found id: ""
	I0603 13:52:20.912989 1143678 logs.go:276] 0 containers: []
	W0603 13:52:20.912999 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:20.913011 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:20.913030 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:20.963655 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:20.963700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:20.978619 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:20.978658 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:21.057136 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:21.057163 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:21.057185 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:21.136368 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:21.136415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:19.817227 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.817625 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:20.871465 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.370757 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:21.488564 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.991662 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:23.676222 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:23.691111 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:23.691213 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:23.733282 1143678 cri.go:89] found id: ""
	I0603 13:52:23.733319 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.733332 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:23.733341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:23.733438 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:23.780841 1143678 cri.go:89] found id: ""
	I0603 13:52:23.780873 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.780882 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:23.780894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:23.780947 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:23.820521 1143678 cri.go:89] found id: ""
	I0603 13:52:23.820553 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.820565 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:23.820573 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:23.820636 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:23.857684 1143678 cri.go:89] found id: ""
	I0603 13:52:23.857728 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.857739 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:23.857747 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:23.857818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:23.896800 1143678 cri.go:89] found id: ""
	I0603 13:52:23.896829 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.896842 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:23.896850 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:23.896914 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:23.935511 1143678 cri.go:89] found id: ""
	I0603 13:52:23.935538 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.935547 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:23.935554 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:23.935608 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:23.973858 1143678 cri.go:89] found id: ""
	I0603 13:52:23.973885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:23.973895 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:23.973901 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:23.973961 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:24.012491 1143678 cri.go:89] found id: ""
	I0603 13:52:24.012521 1143678 logs.go:276] 0 containers: []
	W0603 13:52:24.012532 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:24.012545 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:24.012569 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.064274 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:24.064319 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:24.079382 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:24.079420 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:24.153708 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:24.153733 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:24.153749 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:24.233104 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:24.233148 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:26.774771 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:26.789853 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:26.789924 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:26.830089 1143678 cri.go:89] found id: ""
	I0603 13:52:26.830129 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.830167 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:26.830176 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:26.830251 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:26.866907 1143678 cri.go:89] found id: ""
	I0603 13:52:26.866941 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.866952 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:26.866960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:26.867031 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:26.915028 1143678 cri.go:89] found id: ""
	I0603 13:52:26.915061 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.915070 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:26.915079 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:26.915151 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:26.962044 1143678 cri.go:89] found id: ""
	I0603 13:52:26.962075 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.962083 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:26.962088 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:26.962154 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:26.996156 1143678 cri.go:89] found id: ""
	I0603 13:52:26.996188 1143678 logs.go:276] 0 containers: []
	W0603 13:52:26.996196 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:26.996202 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:26.996265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:27.038593 1143678 cri.go:89] found id: ""
	I0603 13:52:27.038627 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.038636 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:27.038642 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:27.038708 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:27.076116 1143678 cri.go:89] found id: ""
	I0603 13:52:27.076144 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.076153 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:27.076159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:27.076228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:27.110653 1143678 cri.go:89] found id: ""
	I0603 13:52:27.110688 1143678 logs.go:276] 0 containers: []
	W0603 13:52:27.110700 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:27.110714 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:27.110733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:27.193718 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:27.193743 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:27.193756 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:27.269423 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:27.269483 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:27.307899 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:27.307939 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:24.317663 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.817148 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:25.371861 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.870070 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:29.870299 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:26.488753 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:28.489065 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:30.489568 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:27.363830 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:27.363878 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:29.879016 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:29.893482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:29.893553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:29.932146 1143678 cri.go:89] found id: ""
	I0603 13:52:29.932190 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.932199 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:29.932205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:29.932259 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:29.968986 1143678 cri.go:89] found id: ""
	I0603 13:52:29.969020 1143678 logs.go:276] 0 containers: []
	W0603 13:52:29.969032 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:29.969040 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:29.969097 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:30.007190 1143678 cri.go:89] found id: ""
	I0603 13:52:30.007228 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.007238 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:30.007244 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:30.007303 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:30.044607 1143678 cri.go:89] found id: ""
	I0603 13:52:30.044638 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.044646 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:30.044652 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:30.044706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:30.083103 1143678 cri.go:89] found id: ""
	I0603 13:52:30.083179 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.083193 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:30.083204 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:30.083280 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:30.124125 1143678 cri.go:89] found id: ""
	I0603 13:52:30.124152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.124160 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:30.124167 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:30.124234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:30.164293 1143678 cri.go:89] found id: ""
	I0603 13:52:30.164329 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.164345 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:30.164353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:30.164467 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:30.219980 1143678 cri.go:89] found id: ""
	I0603 13:52:30.220015 1143678 logs.go:276] 0 containers: []
	W0603 13:52:30.220028 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:30.220042 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:30.220063 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:30.313282 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:30.313305 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:30.313323 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:30.393759 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:30.393801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:30.441384 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:30.441434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:30.493523 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:30.493558 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
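
The loop above keeps probing the node over SSH for control-plane containers: every `sudo crictl ps -a --quiet --name=<component>` comes back with an empty ID list (found id: ""), after which the run collects kubelet, dmesg, CRI-O, and container-status output. A minimal sketch of the same probe, assuming direct shell access to the node, sudo rights, and crictl on PATH (the helper name and the subset of components are illustrative, not minikube's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContainer reports whether crictl lists any container (running or exited)
// whose name matches the given filter, mirroring the probe seen in the logs.
func hasContainer(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	return len(strings.Fields(string(out))) > 0, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ok, err := hasContainer(c)
		if err != nil {
			fmt.Printf("probe %s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s present: %v\n", c, ok)
	}
}

An empty result for kube-apiserver is what drives the rest of the pattern in this log: with no apiserver container, every kubectl call against the node can only fail.
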
	I0603 13:52:28.817554 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.317629 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:31.870659 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.870954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:32.990340 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.495665 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:33.009114 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:33.023177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:33.023278 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:33.065346 1143678 cri.go:89] found id: ""
	I0603 13:52:33.065388 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.065400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:33.065424 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:33.065506 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:33.108513 1143678 cri.go:89] found id: ""
	I0603 13:52:33.108549 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.108561 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:33.108569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:33.108640 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:33.146053 1143678 cri.go:89] found id: ""
	I0603 13:52:33.146082 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.146089 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:33.146107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:33.146165 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:33.187152 1143678 cri.go:89] found id: ""
	I0603 13:52:33.187195 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.187206 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:33.187216 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:33.187302 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:33.223887 1143678 cri.go:89] found id: ""
	I0603 13:52:33.223920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.223932 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:33.223941 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:33.224010 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:33.263902 1143678 cri.go:89] found id: ""
	I0603 13:52:33.263958 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.263971 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:33.263980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:33.264048 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:33.302753 1143678 cri.go:89] found id: ""
	I0603 13:52:33.302785 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.302796 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:33.302805 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:33.302859 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:33.340711 1143678 cri.go:89] found id: ""
	I0603 13:52:33.340745 1143678 logs.go:276] 0 containers: []
	W0603 13:52:33.340754 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:33.340763 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:33.340780 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:33.400226 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:33.400271 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:33.414891 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:33.414923 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:33.498121 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:33.498156 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:33.498172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.575682 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:33.575731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
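
Each "describe nodes" attempt fails the same way: kubectl is run against the node-local kubeconfig (/var/lib/minikube/kubeconfig), but with no kube-apiserver container running, localhost:8443 refuses the connection. One quick way to confirm the apiserver is down is to hit its /healthz endpoint directly from the node; a minimal sketch, assuming the same port 8443 that appears in the error above (the skip-verify transport is for a liveness check only and is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// The apiserver serves a self-signed certificate; skip verification for this probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// A "connection refused" here matches the kubectl error in the logs above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /healthz:", resp.Status)
}
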
	I0603 13:52:36.116930 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:36.133001 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:36.133070 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:36.182727 1143678 cri.go:89] found id: ""
	I0603 13:52:36.182763 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.182774 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:36.182782 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:36.182851 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:36.228804 1143678 cri.go:89] found id: ""
	I0603 13:52:36.228841 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.228854 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:36.228862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:36.228929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:36.279320 1143678 cri.go:89] found id: ""
	I0603 13:52:36.279359 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.279370 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:36.279378 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:36.279461 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:36.319725 1143678 cri.go:89] found id: ""
	I0603 13:52:36.319751 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.319759 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:36.319765 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:36.319819 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:36.356657 1143678 cri.go:89] found id: ""
	I0603 13:52:36.356685 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.356693 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:36.356703 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:36.356760 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:36.393397 1143678 cri.go:89] found id: ""
	I0603 13:52:36.393448 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.393459 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:36.393467 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:36.393545 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:36.429211 1143678 cri.go:89] found id: ""
	I0603 13:52:36.429246 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.429254 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:36.429260 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:36.429324 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:36.466796 1143678 cri.go:89] found id: ""
	I0603 13:52:36.466831 1143678 logs.go:276] 0 containers: []
	W0603 13:52:36.466839 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:36.466849 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:36.466862 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:36.509871 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:36.509900 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:36.562167 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:36.562206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:36.577014 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:36.577047 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:36.657581 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:36.657604 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:36.657625 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:33.817495 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:35.820854 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:36.371645 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:38.871484 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:37.989038 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:39.989986 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
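
Interleaved with that probe loop, three other test processes (1142862, 1143252, 1143450) keep polling their metrics-server pods in kube-system, and the Ready condition never turns true. The same check can be reproduced by hand with kubectl's JSONPath output; a minimal sketch of such a poll, using the pod name from the log line above and the current kubectl context (the 5s interval and 2m timeout are arbitrary choices, not the test's own values):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl for the pod's Ready condition via JSONPath.
func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "metrics-server-569cc877fc-mtvrq")
		if err != nil {
			fmt.Println("poll error:", err)
		} else if ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}
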
	I0603 13:52:39.242339 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:39.257985 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:39.258072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:39.300153 1143678 cri.go:89] found id: ""
	I0603 13:52:39.300185 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.300197 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:39.300205 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:39.300304 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:39.336117 1143678 cri.go:89] found id: ""
	I0603 13:52:39.336152 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.336162 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:39.336175 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:39.336307 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:39.375945 1143678 cri.go:89] found id: ""
	I0603 13:52:39.375979 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.375990 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:39.375998 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:39.376066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:39.417207 1143678 cri.go:89] found id: ""
	I0603 13:52:39.417242 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.417253 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:39.417261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:39.417340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:39.456259 1143678 cri.go:89] found id: ""
	I0603 13:52:39.456295 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.456307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:39.456315 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:39.456377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:39.494879 1143678 cri.go:89] found id: ""
	I0603 13:52:39.494904 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.494913 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:39.494919 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:39.494979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:39.532129 1143678 cri.go:89] found id: ""
	I0603 13:52:39.532157 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.532168 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:39.532177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:39.532267 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:39.570662 1143678 cri.go:89] found id: ""
	I0603 13:52:39.570693 1143678 logs.go:276] 0 containers: []
	W0603 13:52:39.570703 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:39.570717 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:39.570734 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:39.622008 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:39.622057 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:39.636849 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:39.636884 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:39.719914 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:39.719948 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:39.719967 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:39.801723 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:39.801769 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:38.317321 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:40.817649 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.819652 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:41.370965 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:43.371900 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.490311 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:44.988731 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:42.348936 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:42.363663 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:42.363735 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:42.400584 1143678 cri.go:89] found id: ""
	I0603 13:52:42.400616 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.400625 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:42.400631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:42.400685 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:42.438853 1143678 cri.go:89] found id: ""
	I0603 13:52:42.438885 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.438893 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:42.438899 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:42.438954 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:42.474980 1143678 cri.go:89] found id: ""
	I0603 13:52:42.475013 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.475025 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:42.475032 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:42.475086 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:42.511027 1143678 cri.go:89] found id: ""
	I0603 13:52:42.511056 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.511068 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:42.511077 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:42.511237 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:42.545333 1143678 cri.go:89] found id: ""
	I0603 13:52:42.545367 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.545378 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:42.545386 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:42.545468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:42.583392 1143678 cri.go:89] found id: ""
	I0603 13:52:42.583438 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.583556 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:42.583591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:42.583656 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:42.620886 1143678 cri.go:89] found id: ""
	I0603 13:52:42.620916 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.620924 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:42.620930 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:42.620985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:42.656265 1143678 cri.go:89] found id: ""
	I0603 13:52:42.656301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:42.656313 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:42.656327 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:42.656344 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:42.711078 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:42.711124 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:42.727751 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:42.727788 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:42.802330 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:42.802356 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:42.802370 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:42.883700 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:42.883742 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.424591 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:45.440797 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:45.440883 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:45.483664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.483698 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.483709 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:45.483717 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:45.483789 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:45.523147 1143678 cri.go:89] found id: ""
	I0603 13:52:45.523182 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.523193 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:45.523201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:45.523273 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:45.563483 1143678 cri.go:89] found id: ""
	I0603 13:52:45.563516 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.563527 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:45.563536 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:45.563598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:45.603574 1143678 cri.go:89] found id: ""
	I0603 13:52:45.603603 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.603618 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:45.603625 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:45.603680 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:45.642664 1143678 cri.go:89] found id: ""
	I0603 13:52:45.642694 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.642705 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:45.642714 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:45.642793 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:45.679961 1143678 cri.go:89] found id: ""
	I0603 13:52:45.679998 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.680011 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:45.680026 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:45.680100 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:45.716218 1143678 cri.go:89] found id: ""
	I0603 13:52:45.716255 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.716263 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:45.716270 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:45.716364 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:45.752346 1143678 cri.go:89] found id: ""
	I0603 13:52:45.752374 1143678 logs.go:276] 0 containers: []
	W0603 13:52:45.752382 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:45.752391 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:45.752405 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:45.793992 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:45.794029 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:45.844930 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:45.844973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:45.859594 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:45.859633 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:45.936469 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:45.936498 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:45.936515 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
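
The collection commands in each cycle are worth noting: kubelet and CRI-O logs come from journalctl with a 400-line tail, and container status uses a fallback pipeline, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, which prefers the full crictl path when it exists and drops back to docker if crictl fails. A minimal sketch that runs the same commands over a local bash shell (assuming bash, sudo, and the crio/kubelet systemd units are present; this is an illustration, not the test harness code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same collection commands that appear in the log-gathering loop above.
	cmds := map[string]string{
		"kubelet logs":     "sudo journalctl -u kubelet -n 400",
		"CRI-O logs":       "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}
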
	I0603 13:52:45.317705 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.818994 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:45.870780 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:47.871003 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.871625 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:46.990866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:49.488680 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:48.514959 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:48.528331 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:48.528401 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:48.565671 1143678 cri.go:89] found id: ""
	I0603 13:52:48.565703 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.565715 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:48.565724 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:48.565786 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:48.603938 1143678 cri.go:89] found id: ""
	I0603 13:52:48.603973 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.603991 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:48.604000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:48.604068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:48.643521 1143678 cri.go:89] found id: ""
	I0603 13:52:48.643550 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.643562 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:48.643571 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:48.643627 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:48.678264 1143678 cri.go:89] found id: ""
	I0603 13:52:48.678301 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.678312 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:48.678320 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:48.678407 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:48.714974 1143678 cri.go:89] found id: ""
	I0603 13:52:48.715014 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.715026 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:48.715034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:48.715138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:48.750364 1143678 cri.go:89] found id: ""
	I0603 13:52:48.750396 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.750408 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:48.750416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:48.750482 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:48.788203 1143678 cri.go:89] found id: ""
	I0603 13:52:48.788238 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.788249 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:48.788258 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:48.788345 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:48.826891 1143678 cri.go:89] found id: ""
	I0603 13:52:48.826920 1143678 logs.go:276] 0 containers: []
	W0603 13:52:48.826928 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:48.826938 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:48.826951 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:48.877271 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:48.877315 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:48.892155 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:48.892187 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:48.973433 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:48.973459 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:48.973473 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:49.062819 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:49.062888 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:51.614261 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:51.628056 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:51.628142 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:51.662894 1143678 cri.go:89] found id: ""
	I0603 13:52:51.662924 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.662935 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:51.662942 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:51.663009 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:51.701847 1143678 cri.go:89] found id: ""
	I0603 13:52:51.701878 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.701889 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:51.701896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:51.701963 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:51.737702 1143678 cri.go:89] found id: ""
	I0603 13:52:51.737741 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.737752 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:51.737760 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:51.737833 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:51.772913 1143678 cri.go:89] found id: ""
	I0603 13:52:51.772944 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.772956 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:51.772964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:51.773034 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:51.810268 1143678 cri.go:89] found id: ""
	I0603 13:52:51.810298 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.810307 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:51.810312 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:51.810377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:51.848575 1143678 cri.go:89] found id: ""
	I0603 13:52:51.848612 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.848624 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:51.848633 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:51.848696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:51.886500 1143678 cri.go:89] found id: ""
	I0603 13:52:51.886536 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.886549 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:51.886560 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:51.886617 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:51.924070 1143678 cri.go:89] found id: ""
	I0603 13:52:51.924104 1143678 logs.go:276] 0 containers: []
	W0603 13:52:51.924115 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:51.924128 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:51.924146 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:51.940324 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:51.940355 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:52.019958 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:52.019997 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:52.020015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:52.095953 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:52.095999 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:52.141070 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:52.141102 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:50.317008 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:52.317142 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.872275 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.376761 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:51.490098 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:53.491292 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:54.694651 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:54.708508 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:54.708597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:54.745708 1143678 cri.go:89] found id: ""
	I0603 13:52:54.745748 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.745762 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:54.745770 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:54.745842 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:54.783335 1143678 cri.go:89] found id: ""
	I0603 13:52:54.783369 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.783381 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:54.783389 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:54.783465 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:54.824111 1143678 cri.go:89] found id: ""
	I0603 13:52:54.824140 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.824151 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:54.824159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:54.824230 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:54.868676 1143678 cri.go:89] found id: ""
	I0603 13:52:54.868710 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.868721 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:54.868730 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:54.868801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:54.906180 1143678 cri.go:89] found id: ""
	I0603 13:52:54.906216 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.906227 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:54.906235 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:54.906310 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:54.945499 1143678 cri.go:89] found id: ""
	I0603 13:52:54.945532 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.945544 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:54.945552 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:54.945619 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:54.986785 1143678 cri.go:89] found id: ""
	I0603 13:52:54.986812 1143678 logs.go:276] 0 containers: []
	W0603 13:52:54.986820 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:54.986826 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:54.986888 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:55.035290 1143678 cri.go:89] found id: ""
	I0603 13:52:55.035320 1143678 logs.go:276] 0 containers: []
	W0603 13:52:55.035329 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:55.035338 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:55.035352 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:55.085384 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:55.085451 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:55.100699 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:55.100733 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:55.171587 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:55.171614 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:55.171638 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:55.249078 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:55.249123 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:52:54.317435 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.318657 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:56.869954 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.872728 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:55.990512 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:58.489578 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:00.490668 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:52:57.791538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:52:57.804373 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:52:57.804437 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:52:57.843969 1143678 cri.go:89] found id: ""
	I0603 13:52:57.844007 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.844016 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:52:57.844022 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:52:57.844077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:52:57.881201 1143678 cri.go:89] found id: ""
	I0603 13:52:57.881239 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.881252 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:52:57.881261 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:52:57.881336 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:52:57.917572 1143678 cri.go:89] found id: ""
	I0603 13:52:57.917601 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.917610 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:52:57.917617 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:52:57.917671 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:52:57.951603 1143678 cri.go:89] found id: ""
	I0603 13:52:57.951642 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.951654 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:52:57.951661 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:52:57.951716 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:52:57.992833 1143678 cri.go:89] found id: ""
	I0603 13:52:57.992863 1143678 logs.go:276] 0 containers: []
	W0603 13:52:57.992874 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:52:57.992881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:52:57.992945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:52:58.031595 1143678 cri.go:89] found id: ""
	I0603 13:52:58.031636 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.031648 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:52:58.031657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:52:58.031723 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:52:58.068947 1143678 cri.go:89] found id: ""
	I0603 13:52:58.068985 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.068996 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:52:58.069005 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:52:58.069077 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:52:58.106559 1143678 cri.go:89] found id: ""
	I0603 13:52:58.106587 1143678 logs.go:276] 0 containers: []
	W0603 13:52:58.106598 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:52:58.106623 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:52:58.106640 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:52:58.162576 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:52:58.162623 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:52:58.177104 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:52:58.177155 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:52:58.250279 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:52:58.250312 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:52:58.250329 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.330876 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:52:58.330920 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:00.871443 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:00.885505 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:00.885589 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:00.923878 1143678 cri.go:89] found id: ""
	I0603 13:53:00.923910 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.923920 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:00.923928 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:00.923995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:00.960319 1143678 cri.go:89] found id: ""
	I0603 13:53:00.960362 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.960375 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:00.960384 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:00.960449 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:00.998806 1143678 cri.go:89] found id: ""
	I0603 13:53:00.998845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:00.998857 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:00.998866 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:00.998929 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:01.033211 1143678 cri.go:89] found id: ""
	I0603 13:53:01.033245 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.033256 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:01.033265 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:01.033341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:01.072852 1143678 cri.go:89] found id: ""
	I0603 13:53:01.072883 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.072891 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:01.072898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:01.072950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:01.115667 1143678 cri.go:89] found id: ""
	I0603 13:53:01.115699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.115711 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:01.115719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:01.115824 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:01.153676 1143678 cri.go:89] found id: ""
	I0603 13:53:01.153717 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.153733 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:01.153741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:01.153815 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:01.188970 1143678 cri.go:89] found id: ""
	I0603 13:53:01.189003 1143678 logs.go:276] 0 containers: []
	W0603 13:53:01.189017 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:01.189031 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:01.189049 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:01.233151 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:01.233214 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:01.287218 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:01.287269 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:01.302370 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:01.302408 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:01.378414 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:01.378444 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:01.378463 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:52:58.817003 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.317698 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:01.371257 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.872917 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:02.989133 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:04.990930 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:03.957327 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:03.971246 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:03.971340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:04.007299 1143678 cri.go:89] found id: ""
	I0603 13:53:04.007335 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.007347 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:04.007356 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:04.007427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:04.046364 1143678 cri.go:89] found id: ""
	I0603 13:53:04.046396 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.046405 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:04.046411 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:04.046469 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:04.082094 1143678 cri.go:89] found id: ""
	I0603 13:53:04.082127 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.082139 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:04.082148 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:04.082209 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:04.117389 1143678 cri.go:89] found id: ""
	I0603 13:53:04.117434 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.117446 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:04.117454 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:04.117530 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:04.150560 1143678 cri.go:89] found id: ""
	I0603 13:53:04.150596 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.150606 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:04.150614 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:04.150678 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:04.184808 1143678 cri.go:89] found id: ""
	I0603 13:53:04.184845 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.184857 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:04.184865 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:04.184935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:04.220286 1143678 cri.go:89] found id: ""
	I0603 13:53:04.220317 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.220326 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:04.220332 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:04.220385 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:04.258898 1143678 cri.go:89] found id: ""
	I0603 13:53:04.258929 1143678 logs.go:276] 0 containers: []
	W0603 13:53:04.258941 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:04.258955 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:04.258972 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:04.312151 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:04.312198 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:04.329908 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:04.329943 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:04.402075 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:04.402106 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:04.402138 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:04.482873 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:04.482936 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:07.049978 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:07.063072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:07.063140 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:07.097703 1143678 cri.go:89] found id: ""
	I0603 13:53:07.097737 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.097748 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:07.097755 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:07.097811 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:07.134826 1143678 cri.go:89] found id: ""
	I0603 13:53:07.134865 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.134878 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:07.134886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:07.134955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:07.178015 1143678 cri.go:89] found id: ""
	I0603 13:53:07.178050 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.178061 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:07.178068 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:07.178138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:07.215713 1143678 cri.go:89] found id: ""
	I0603 13:53:07.215753 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.215764 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:07.215777 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:07.215840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:07.251787 1143678 cri.go:89] found id: ""
	I0603 13:53:07.251815 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.251824 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:07.251830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:07.251897 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:07.293357 1143678 cri.go:89] found id: ""
	I0603 13:53:07.293387 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.293398 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:07.293427 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:07.293496 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:07.329518 1143678 cri.go:89] found id: ""
	I0603 13:53:07.329551 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.329561 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:07.329569 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:07.329650 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:03.819203 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.317653 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:06.370539 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:08.370701 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.490706 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:09.990002 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:07.369534 1143678 cri.go:89] found id: ""
	I0603 13:53:07.369576 1143678 logs.go:276] 0 containers: []
	W0603 13:53:07.369587 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:07.369601 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:07.369617 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:07.424211 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:07.424260 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:07.439135 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:07.439172 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:07.511325 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:07.511360 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:07.511378 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:07.588348 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:07.588393 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:10.129812 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:10.143977 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:10.144057 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:10.181873 1143678 cri.go:89] found id: ""
	I0603 13:53:10.181906 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.181918 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:10.181926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:10.181981 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:10.218416 1143678 cri.go:89] found id: ""
	I0603 13:53:10.218460 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.218473 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:10.218482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:10.218562 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:10.253580 1143678 cri.go:89] found id: ""
	I0603 13:53:10.253618 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.253630 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:10.253646 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:10.253717 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:10.302919 1143678 cri.go:89] found id: ""
	I0603 13:53:10.302949 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.302957 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:10.302964 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:10.303024 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:10.343680 1143678 cri.go:89] found id: ""
	I0603 13:53:10.343709 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.343721 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:10.343729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:10.343798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:10.379281 1143678 cri.go:89] found id: ""
	I0603 13:53:10.379307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.379315 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:10.379322 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:10.379374 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:10.420197 1143678 cri.go:89] found id: ""
	I0603 13:53:10.420225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.420233 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:10.420239 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:10.420322 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:10.458578 1143678 cri.go:89] found id: ""
	I0603 13:53:10.458609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:10.458618 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:10.458629 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:10.458642 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:10.511785 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:10.511828 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:10.526040 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:10.526081 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:10.603721 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:10.603749 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:10.603766 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:10.684153 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:10.684204 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:08.816447 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.318264 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:10.374788 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:12.871019 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.871064 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:11.992127 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:14.488866 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:13.227605 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:13.241131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:13.241228 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:13.284636 1143678 cri.go:89] found id: ""
	I0603 13:53:13.284667 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.284675 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:13.284681 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:13.284737 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:13.322828 1143678 cri.go:89] found id: ""
	I0603 13:53:13.322861 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.322873 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:13.322881 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:13.322945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:13.360061 1143678 cri.go:89] found id: ""
	I0603 13:53:13.360089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.360097 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:13.360103 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:13.360176 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:13.397115 1143678 cri.go:89] found id: ""
	I0603 13:53:13.397149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.397158 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:13.397164 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:13.397234 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:13.434086 1143678 cri.go:89] found id: ""
	I0603 13:53:13.434118 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.434127 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:13.434135 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:13.434194 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:13.470060 1143678 cri.go:89] found id: ""
	I0603 13:53:13.470089 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.470101 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:13.470113 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:13.470189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:13.508423 1143678 cri.go:89] found id: ""
	I0603 13:53:13.508464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.508480 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:13.508487 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:13.508552 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:13.546713 1143678 cri.go:89] found id: ""
	I0603 13:53:13.546752 1143678 logs.go:276] 0 containers: []
	W0603 13:53:13.546765 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:13.546778 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:13.546796 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:13.632984 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:13.633027 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:13.679169 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:13.679216 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:13.735765 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:13.735812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.750175 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:13.750210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:13.826571 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.327185 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:16.340163 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:16.340253 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:16.380260 1143678 cri.go:89] found id: ""
	I0603 13:53:16.380292 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.380300 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:16.380307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:16.380373 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:16.420408 1143678 cri.go:89] found id: ""
	I0603 13:53:16.420438 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.420449 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:16.420457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:16.420534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:16.459250 1143678 cri.go:89] found id: ""
	I0603 13:53:16.459285 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.459297 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:16.459307 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:16.459377 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:16.496395 1143678 cri.go:89] found id: ""
	I0603 13:53:16.496427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.496436 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:16.496444 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:16.496516 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:16.534402 1143678 cri.go:89] found id: ""
	I0603 13:53:16.534433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.534442 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:16.534449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:16.534514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:16.571550 1143678 cri.go:89] found id: ""
	I0603 13:53:16.571577 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.571584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:16.571591 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:16.571659 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:16.608425 1143678 cri.go:89] found id: ""
	I0603 13:53:16.608457 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.608468 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:16.608482 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:16.608549 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:16.647282 1143678 cri.go:89] found id: ""
	I0603 13:53:16.647315 1143678 logs.go:276] 0 containers: []
	W0603 13:53:16.647324 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:16.647334 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:16.647351 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:16.728778 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:16.728814 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:16.728831 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:16.822702 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:16.822747 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:16.868816 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:16.868845 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:16.922262 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:16.922301 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:13.818935 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.316865 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:17.370681 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.371232 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:16.489494 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:18.490176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:20.491433 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:19.438231 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:19.452520 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:19.452603 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:19.488089 1143678 cri.go:89] found id: ""
	I0603 13:53:19.488121 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.488133 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:19.488141 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:19.488216 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:19.524494 1143678 cri.go:89] found id: ""
	I0603 13:53:19.524527 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.524537 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:19.524543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:19.524595 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:19.561288 1143678 cri.go:89] found id: ""
	I0603 13:53:19.561323 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.561333 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:19.561341 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:19.561420 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:19.597919 1143678 cri.go:89] found id: ""
	I0603 13:53:19.597965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.597976 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:19.597984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:19.598056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:19.634544 1143678 cri.go:89] found id: ""
	I0603 13:53:19.634579 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.634591 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:19.634599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:19.634668 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:19.671473 1143678 cri.go:89] found id: ""
	I0603 13:53:19.671506 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.671518 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:19.671527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:19.671598 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:19.707968 1143678 cri.go:89] found id: ""
	I0603 13:53:19.708000 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.708011 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:19.708019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:19.708119 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:19.745555 1143678 cri.go:89] found id: ""
	I0603 13:53:19.745593 1143678 logs.go:276] 0 containers: []
	W0603 13:53:19.745604 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:19.745617 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:19.745631 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:19.830765 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:19.830812 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:19.875160 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:19.875197 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:19.927582 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:19.927627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:19.942258 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:19.942289 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:20.016081 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:18.820067 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.319103 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:21.871214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.371680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.990210 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:24.990605 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:22.516859 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:22.534973 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:22.535040 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:22.593003 1143678 cri.go:89] found id: ""
	I0603 13:53:22.593043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.593051 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:22.593058 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:22.593121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:22.649916 1143678 cri.go:89] found id: ""
	I0603 13:53:22.649951 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.649963 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:22.649971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:22.650030 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:22.689397 1143678 cri.go:89] found id: ""
	I0603 13:53:22.689449 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.689459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:22.689465 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:22.689521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:22.725109 1143678 cri.go:89] found id: ""
	I0603 13:53:22.725149 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.725161 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:22.725169 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:22.725250 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:22.761196 1143678 cri.go:89] found id: ""
	I0603 13:53:22.761225 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.761237 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:22.761245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:22.761311 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:22.804065 1143678 cri.go:89] found id: ""
	I0603 13:53:22.804103 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.804112 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:22.804119 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:22.804189 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:22.840456 1143678 cri.go:89] found id: ""
	I0603 13:53:22.840485 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.840493 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:22.840499 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:22.840553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:22.876796 1143678 cri.go:89] found id: ""
	I0603 13:53:22.876831 1143678 logs.go:276] 0 containers: []
	W0603 13:53:22.876842 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:22.876854 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:22.876869 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:22.957274 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:22.957317 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:22.998360 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:22.998394 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.054895 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:23.054942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:23.070107 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:23.070141 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:23.147460 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:25.647727 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:25.663603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:25.663691 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:25.698102 1143678 cri.go:89] found id: ""
	I0603 13:53:25.698139 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.698150 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:25.698159 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:25.698227 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:25.738601 1143678 cri.go:89] found id: ""
	I0603 13:53:25.738641 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.738648 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:25.738655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:25.738718 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:25.780622 1143678 cri.go:89] found id: ""
	I0603 13:53:25.780657 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.780670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:25.780678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:25.780751 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:25.816950 1143678 cri.go:89] found id: ""
	I0603 13:53:25.816978 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.816989 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:25.816997 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:25.817060 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:25.860011 1143678 cri.go:89] found id: ""
	I0603 13:53:25.860051 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.860063 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:25.860072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:25.860138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:25.898832 1143678 cri.go:89] found id: ""
	I0603 13:53:25.898866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.898878 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:25.898886 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:25.898959 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:25.937483 1143678 cri.go:89] found id: ""
	I0603 13:53:25.937518 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.937533 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:25.937541 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:25.937607 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:25.973972 1143678 cri.go:89] found id: ""
	I0603 13:53:25.974008 1143678 logs.go:276] 0 containers: []
	W0603 13:53:25.974021 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:25.974034 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:25.974065 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:25.989188 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:25.989227 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:26.065521 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:26.065546 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:26.065560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:26.147852 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:26.147899 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:26.191395 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:26.191431 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:23.816928 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:25.818534 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:26.872084 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.872558 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:27.489951 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:29.989352 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:28.751041 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:28.764764 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:28.764826 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:28.808232 1143678 cri.go:89] found id: ""
	I0603 13:53:28.808271 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.808285 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:28.808293 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:28.808369 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:28.849058 1143678 cri.go:89] found id: ""
	I0603 13:53:28.849094 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.849107 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:28.849114 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:28.849187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:28.892397 1143678 cri.go:89] found id: ""
	I0603 13:53:28.892427 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.892441 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:28.892447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:28.892515 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:28.932675 1143678 cri.go:89] found id: ""
	I0603 13:53:28.932715 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.932727 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:28.932735 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:28.932840 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:28.969732 1143678 cri.go:89] found id: ""
	I0603 13:53:28.969769 1143678 logs.go:276] 0 containers: []
	W0603 13:53:28.969781 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:28.969789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:28.969857 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:29.007765 1143678 cri.go:89] found id: ""
	I0603 13:53:29.007791 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.007798 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:29.007804 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:29.007865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:29.044616 1143678 cri.go:89] found id: ""
	I0603 13:53:29.044652 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.044664 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:29.044675 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:29.044734 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:29.081133 1143678 cri.go:89] found id: ""
	I0603 13:53:29.081166 1143678 logs.go:276] 0 containers: []
	W0603 13:53:29.081187 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:29.081198 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:29.081213 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:29.095753 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:29.095783 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:29.174472 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:29.174496 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:29.174516 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:29.251216 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:29.251262 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:29.289127 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:29.289168 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:31.845335 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:31.860631 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:31.860720 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:31.904507 1143678 cri.go:89] found id: ""
	I0603 13:53:31.904544 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.904556 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:31.904564 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:31.904633 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:31.940795 1143678 cri.go:89] found id: ""
	I0603 13:53:31.940832 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.940845 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:31.940852 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:31.940921 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:31.978447 1143678 cri.go:89] found id: ""
	I0603 13:53:31.978481 1143678 logs.go:276] 0 containers: []
	W0603 13:53:31.978499 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:31.978507 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:31.978569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:32.017975 1143678 cri.go:89] found id: ""
	I0603 13:53:32.018009 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.018018 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:32.018025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:32.018089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:32.053062 1143678 cri.go:89] found id: ""
	I0603 13:53:32.053091 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.053099 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:32.053106 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:32.053181 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:32.089822 1143678 cri.go:89] found id: ""
	I0603 13:53:32.089856 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.089868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:32.089877 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:32.089944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:32.126243 1143678 cri.go:89] found id: ""
	I0603 13:53:32.126280 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.126291 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:32.126299 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:32.126358 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:32.163297 1143678 cri.go:89] found id: ""
	I0603 13:53:32.163346 1143678 logs.go:276] 0 containers: []
	W0603 13:53:32.163357 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:32.163370 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:32.163386 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:32.218452 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:32.218495 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:32.233688 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:32.233731 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:32.318927 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:32.318947 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:32.318963 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:28.317046 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:30.317308 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.318273 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.370654 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:33.371038 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:31.991594 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:34.492142 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:32.403734 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:32.403786 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
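The cycle above repeats for the remainder of this failure: the harness probes for a running kube-apiserver process, lists CRI containers for each expected control-plane component with `crictl ps -a --quiet --name=<component>`, finds none, and falls back to gathering kubelet, dmesg, CRI-O, and container-status output, while `kubectl describe nodes` keeps failing because nothing is serving on localhost:8443. The Go sketch below only illustrates that diagnostic loop under stated assumptions (commands run locally with sudo, crictl and journalctl on PATH, plain exec instead of minikube's SSH runner); it is not minikube's actual implementation.

// Illustrative sketch of the diagnostic loop seen in the log above.
// Assumptions: run with sudo available, crictl and journalctl installed locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names taken from the log's crictl queries.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	anyFound := false
	for _, name := range components {
		// Same query as the log: list all containers whose name matches.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		anyFound = true
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}

	if !anyFound {
		// Fall back to host-level logs, mirroring the "Gathering logs for ..." steps.
		for _, unit := range []string{"kubelet", "crio"} {
			out, _ := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").Output()
			fmt.Printf("--- last journal lines for %s (%d bytes captured) ---\n", unit, len(out))
		}
	}
}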
	I0603 13:53:34.947857 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:34.961894 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:34.961983 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:35.006279 1143678 cri.go:89] found id: ""
	I0603 13:53:35.006308 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.006318 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:35.006326 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:35.006398 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:35.042765 1143678 cri.go:89] found id: ""
	I0603 13:53:35.042794 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.042807 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:35.042815 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:35.042877 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:35.084332 1143678 cri.go:89] found id: ""
	I0603 13:53:35.084365 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.084375 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:35.084381 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:35.084448 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:35.121306 1143678 cri.go:89] found id: ""
	I0603 13:53:35.121337 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.121348 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:35.121358 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:35.121444 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:35.155952 1143678 cri.go:89] found id: ""
	I0603 13:53:35.155994 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.156008 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:35.156016 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:35.156089 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:35.196846 1143678 cri.go:89] found id: ""
	I0603 13:53:35.196881 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.196893 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:35.196902 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:35.196972 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:35.232396 1143678 cri.go:89] found id: ""
	I0603 13:53:35.232429 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.232440 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:35.232449 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:35.232528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:35.269833 1143678 cri.go:89] found id: ""
	I0603 13:53:35.269862 1143678 logs.go:276] 0 containers: []
	W0603 13:53:35.269872 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:35.269885 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:35.269902 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:35.357754 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:35.357794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:35.399793 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:35.399822 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:35.453742 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:35.453782 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:35.468431 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:35.468465 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:35.547817 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:34.816178 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.817093 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:35.373072 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:37.870173 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:36.989364 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.990163 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:38.048517 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:38.063481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:38.063569 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:38.100487 1143678 cri.go:89] found id: ""
	I0603 13:53:38.100523 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.100535 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:38.100543 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:38.100612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:38.137627 1143678 cri.go:89] found id: ""
	I0603 13:53:38.137665 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.137678 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:38.137686 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:38.137754 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:38.176138 1143678 cri.go:89] found id: ""
	I0603 13:53:38.176172 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.176190 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:38.176199 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:38.176265 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:38.214397 1143678 cri.go:89] found id: ""
	I0603 13:53:38.214439 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.214451 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:38.214459 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:38.214528 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:38.250531 1143678 cri.go:89] found id: ""
	I0603 13:53:38.250563 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.250573 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:38.250580 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:38.250642 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:38.286558 1143678 cri.go:89] found id: ""
	I0603 13:53:38.286587 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.286595 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:38.286601 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:38.286652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:38.327995 1143678 cri.go:89] found id: ""
	I0603 13:53:38.328043 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.328055 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:38.328062 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:38.328126 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:38.374266 1143678 cri.go:89] found id: ""
	I0603 13:53:38.374300 1143678 logs.go:276] 0 containers: []
	W0603 13:53:38.374311 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:38.374324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:38.374341 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:38.426876 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:38.426918 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:38.443296 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:38.443340 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:38.514702 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:38.514728 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:38.514746 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:38.601536 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:38.601590 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:41.141766 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:41.155927 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:41.156006 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:41.196829 1143678 cri.go:89] found id: ""
	I0603 13:53:41.196871 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.196884 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:41.196896 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:41.196967 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:41.231729 1143678 cri.go:89] found id: ""
	I0603 13:53:41.231780 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.231802 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:41.231812 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:41.231900 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:41.266663 1143678 cri.go:89] found id: ""
	I0603 13:53:41.266699 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.266711 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:41.266720 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:41.266783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:41.305251 1143678 cri.go:89] found id: ""
	I0603 13:53:41.305278 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.305286 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:41.305292 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:41.305351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:41.342527 1143678 cri.go:89] found id: ""
	I0603 13:53:41.342556 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.342568 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:41.342575 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:41.342637 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:41.379950 1143678 cri.go:89] found id: ""
	I0603 13:53:41.379982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.379992 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:41.379999 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:41.380068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:41.414930 1143678 cri.go:89] found id: ""
	I0603 13:53:41.414965 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.414973 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:41.414980 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:41.415043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:41.449265 1143678 cri.go:89] found id: ""
	I0603 13:53:41.449299 1143678 logs.go:276] 0 containers: []
	W0603 13:53:41.449310 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:41.449324 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:41.449343 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:41.502525 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:41.502560 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:41.519357 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:41.519390 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:41.591443 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:41.591471 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:41.591485 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:41.668758 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:41.668802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:39.317333 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.317598 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:40.370844 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:42.871161 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:41.489574 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:43.989620 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
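The interleaved pod_ready lines (log prefixes 1143252, 1143450, 1142862) appear to come from the parallel StartStop clusters, each polling its metrics-server pod's Ready condition roughly every two seconds and logging "Ready":"False" while the addon never becomes healthy. Below is a minimal client-go sketch of that kind of readiness poll; the kubeconfig path is an assumption for illustration, the pod name is taken from the log, and this is not the test code's own polling helper.

// Minimal readiness poll sketch, assuming a kubeconfig at an example path.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; the real tests use per-profile kubeconfigs.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	const ns = "kube-system"
	const name = "metrics-server-569cc877fc-v7d9t" // pod name from the log

	for {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			ready := false
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("pod %q Ready=%v\n", name, ready)
			if ready {
				return
			}
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between checks
	}
}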
	I0603 13:53:44.211768 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:44.226789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:44.226869 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:44.265525 1143678 cri.go:89] found id: ""
	I0603 13:53:44.265553 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.265561 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:44.265568 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:44.265646 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:44.304835 1143678 cri.go:89] found id: ""
	I0603 13:53:44.304866 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.304874 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:44.304880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:44.304935 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:44.345832 1143678 cri.go:89] found id: ""
	I0603 13:53:44.345875 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.345885 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:44.345891 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:44.345950 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:44.386150 1143678 cri.go:89] found id: ""
	I0603 13:53:44.386186 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.386198 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:44.386207 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:44.386268 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:44.423662 1143678 cri.go:89] found id: ""
	I0603 13:53:44.423697 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.423709 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:44.423719 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:44.423788 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:44.462437 1143678 cri.go:89] found id: ""
	I0603 13:53:44.462464 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.462473 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:44.462481 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:44.462567 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:44.501007 1143678 cri.go:89] found id: ""
	I0603 13:53:44.501062 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.501074 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:44.501081 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:44.501138 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:44.535501 1143678 cri.go:89] found id: ""
	I0603 13:53:44.535543 1143678 logs.go:276] 0 containers: []
	W0603 13:53:44.535554 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:44.535567 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:44.535585 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:44.587114 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:44.587157 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:44.602151 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:44.602180 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:44.674065 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:44.674104 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:44.674122 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:44.757443 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:44.757488 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.306481 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:47.319895 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:47.319958 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:43.818030 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.316852 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:45.370762 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.371799 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:49.871512 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:46.488076 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:48.488472 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.488892 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:47.356975 1143678 cri.go:89] found id: ""
	I0603 13:53:47.357013 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.357026 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:47.357034 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:47.357106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:47.393840 1143678 cri.go:89] found id: ""
	I0603 13:53:47.393869 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.393877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:47.393884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:47.393936 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:47.428455 1143678 cri.go:89] found id: ""
	I0603 13:53:47.428493 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.428506 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:47.428514 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:47.428597 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:47.463744 1143678 cri.go:89] found id: ""
	I0603 13:53:47.463777 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.463788 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:47.463795 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:47.463855 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:47.498134 1143678 cri.go:89] found id: ""
	I0603 13:53:47.498159 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.498167 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:47.498173 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:47.498245 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:47.534153 1143678 cri.go:89] found id: ""
	I0603 13:53:47.534195 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.534206 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:47.534219 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:47.534272 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:47.567148 1143678 cri.go:89] found id: ""
	I0603 13:53:47.567179 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.567187 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:47.567194 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:47.567249 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:47.605759 1143678 cri.go:89] found id: ""
	I0603 13:53:47.605790 1143678 logs.go:276] 0 containers: []
	W0603 13:53:47.605798 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:47.605810 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:47.605824 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:47.683651 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:47.683692 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:47.683705 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:47.763810 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:47.763848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:47.806092 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:47.806131 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:47.859637 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:47.859677 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.377538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:50.391696 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:50.391776 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:50.433968 1143678 cri.go:89] found id: ""
	I0603 13:53:50.434001 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.434013 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:50.434020 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:50.434080 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:50.470561 1143678 cri.go:89] found id: ""
	I0603 13:53:50.470589 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.470596 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:50.470603 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:50.470662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:50.510699 1143678 cri.go:89] found id: ""
	I0603 13:53:50.510727 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.510735 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:50.510741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:50.510808 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:50.553386 1143678 cri.go:89] found id: ""
	I0603 13:53:50.553433 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.553445 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:50.553452 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:50.553533 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:50.589731 1143678 cri.go:89] found id: ""
	I0603 13:53:50.589779 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.589792 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:50.589801 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:50.589885 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:50.625144 1143678 cri.go:89] found id: ""
	I0603 13:53:50.625180 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.625192 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:50.625201 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:50.625274 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:50.669021 1143678 cri.go:89] found id: ""
	I0603 13:53:50.669053 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.669061 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:50.669067 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:50.669121 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:50.714241 1143678 cri.go:89] found id: ""
	I0603 13:53:50.714270 1143678 logs.go:276] 0 containers: []
	W0603 13:53:50.714284 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:50.714297 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:50.714314 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:50.766290 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:50.766333 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:50.797242 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:50.797275 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:50.866589 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:50.866616 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:50.866637 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:50.948808 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:50.948854 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:48.318282 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:50.817445 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.370798 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.377027 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:52.490719 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:54.989907 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:53.496797 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:53.511944 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:53.512021 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:53.549028 1143678 cri.go:89] found id: ""
	I0603 13:53:53.549057 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.549066 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:53.549072 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:53.549128 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:53.583533 1143678 cri.go:89] found id: ""
	I0603 13:53:53.583566 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.583578 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:53.583586 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:53.583652 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:53.618578 1143678 cri.go:89] found id: ""
	I0603 13:53:53.618609 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.618618 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:53.618626 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:53.618701 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:53.653313 1143678 cri.go:89] found id: ""
	I0603 13:53:53.653347 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.653358 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:53.653364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:53.653442 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:53.689805 1143678 cri.go:89] found id: ""
	I0603 13:53:53.689839 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.689849 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:53.689857 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:53.689931 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:53.725538 1143678 cri.go:89] found id: ""
	I0603 13:53:53.725571 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.725584 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:53.725592 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:53.725648 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:53.762284 1143678 cri.go:89] found id: ""
	I0603 13:53:53.762325 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.762336 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:53.762345 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:53.762419 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:53.799056 1143678 cri.go:89] found id: ""
	I0603 13:53:53.799083 1143678 logs.go:276] 0 containers: []
	W0603 13:53:53.799092 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:53.799102 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:53.799115 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:53.873743 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:53.873809 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:53.919692 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:53.919724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:53.969068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:53.969109 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.983840 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:53.983866 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:54.054842 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.555587 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:56.570014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:56.570076 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:56.604352 1143678 cri.go:89] found id: ""
	I0603 13:53:56.604386 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.604400 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:56.604408 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:56.604479 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:56.648126 1143678 cri.go:89] found id: ""
	I0603 13:53:56.648161 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.648171 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:56.648177 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:56.648231 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:56.685621 1143678 cri.go:89] found id: ""
	I0603 13:53:56.685658 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.685670 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:56.685678 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:56.685763 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:56.721860 1143678 cri.go:89] found id: ""
	I0603 13:53:56.721891 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.721913 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:56.721921 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:56.721989 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:56.757950 1143678 cri.go:89] found id: ""
	I0603 13:53:56.757982 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.757995 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:56.758002 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:56.758068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:56.794963 1143678 cri.go:89] found id: ""
	I0603 13:53:56.794991 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.794999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:56.795007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:56.795072 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:56.831795 1143678 cri.go:89] found id: ""
	I0603 13:53:56.831827 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.831839 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:56.831846 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:56.831913 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:56.869263 1143678 cri.go:89] found id: ""
	I0603 13:53:56.869293 1143678 logs.go:276] 0 containers: []
	W0603 13:53:56.869303 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:56.869314 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:53:56.869331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:53:56.945068 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:53:56.945096 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:53:56.945110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:53:57.028545 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:53:57.028582 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:57.069973 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:57.070009 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:53:57.126395 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:53:57.126436 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:53:53.316616 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:55.316981 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:57.317295 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.870680 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.371553 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:56.990964 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.489616 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:53:59.644870 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:53:59.658547 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:53:59.658634 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:53:59.694625 1143678 cri.go:89] found id: ""
	I0603 13:53:59.694656 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.694665 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:53:59.694673 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:53:59.694740 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:53:59.730475 1143678 cri.go:89] found id: ""
	I0603 13:53:59.730573 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.730590 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:53:59.730599 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:53:59.730696 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:53:59.768533 1143678 cri.go:89] found id: ""
	I0603 13:53:59.768567 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.768580 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:53:59.768590 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:53:59.768662 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:53:59.804913 1143678 cri.go:89] found id: ""
	I0603 13:53:59.804944 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.804953 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:53:59.804960 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:53:59.805014 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:53:59.850331 1143678 cri.go:89] found id: ""
	I0603 13:53:59.850363 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.850376 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:53:59.850385 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:53:59.850466 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:53:59.890777 1143678 cri.go:89] found id: ""
	I0603 13:53:59.890814 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.890826 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:53:59.890834 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:53:59.890909 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:53:59.931233 1143678 cri.go:89] found id: ""
	I0603 13:53:59.931268 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.931277 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:53:59.931283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:53:59.931354 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:53:59.966267 1143678 cri.go:89] found id: ""
	I0603 13:53:59.966307 1143678 logs.go:276] 0 containers: []
	W0603 13:53:59.966319 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:53:59.966333 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:53:59.966356 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:00.019884 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:00.019924 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:00.034936 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:00.034982 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:00.115002 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:00.115035 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:00.115053 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:00.189992 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:00.190035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:53:59.818065 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.316183 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.870679 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.872563 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:01.490213 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:03.988699 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:02.737387 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:02.752131 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:02.752220 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:02.787863 1143678 cri.go:89] found id: ""
	I0603 13:54:02.787893 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.787902 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:02.787908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:02.787974 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:02.824938 1143678 cri.go:89] found id: ""
	I0603 13:54:02.824973 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.824983 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:02.824989 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:02.825061 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:02.861425 1143678 cri.go:89] found id: ""
	I0603 13:54:02.861461 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.861469 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:02.861476 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:02.861546 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:02.907417 1143678 cri.go:89] found id: ""
	I0603 13:54:02.907453 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.907475 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:02.907483 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:02.907553 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:02.953606 1143678 cri.go:89] found id: ""
	I0603 13:54:02.953640 1143678 logs.go:276] 0 containers: []
	W0603 13:54:02.953649 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:02.953655 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:02.953728 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:03.007785 1143678 cri.go:89] found id: ""
	I0603 13:54:03.007816 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.007824 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:03.007830 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:03.007896 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:03.058278 1143678 cri.go:89] found id: ""
	I0603 13:54:03.058316 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.058329 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:03.058338 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:03.058404 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:03.094766 1143678 cri.go:89] found id: ""
	I0603 13:54:03.094800 1143678 logs.go:276] 0 containers: []
	W0603 13:54:03.094811 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:03.094824 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:03.094840 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:03.163663 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:03.163690 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:03.163704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:03.250751 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:03.250802 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:03.292418 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:03.292466 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:03.344552 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:03.344600 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:05.859965 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:05.875255 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:05.875340 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:05.918590 1143678 cri.go:89] found id: ""
	I0603 13:54:05.918619 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.918630 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:05.918637 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:05.918706 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:05.953932 1143678 cri.go:89] found id: ""
	I0603 13:54:05.953969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.953980 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:05.953988 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:05.954056 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:05.993319 1143678 cri.go:89] found id: ""
	I0603 13:54:05.993348 1143678 logs.go:276] 0 containers: []
	W0603 13:54:05.993359 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:05.993368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:05.993468 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:06.033047 1143678 cri.go:89] found id: ""
	I0603 13:54:06.033079 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.033087 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:06.033100 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:06.033156 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:06.072607 1143678 cri.go:89] found id: ""
	I0603 13:54:06.072631 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.072640 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:06.072647 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:06.072698 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:06.109944 1143678 cri.go:89] found id: ""
	I0603 13:54:06.109990 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.109999 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:06.110007 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:06.110071 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:06.150235 1143678 cri.go:89] found id: ""
	I0603 13:54:06.150266 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.150276 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:06.150284 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:06.150349 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:06.193963 1143678 cri.go:89] found id: ""
	I0603 13:54:06.193992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:06.194004 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:06.194017 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:06.194035 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:06.235790 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:06.235827 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:06.289940 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:06.289980 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:06.305205 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:06.305240 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:06.381170 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:06.381191 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:06.381206 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:04.316812 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.317759 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:06.370944 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.371668 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:05.989346 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.492021 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:08.958985 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:08.973364 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:08.973462 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:09.015050 1143678 cri.go:89] found id: ""
	I0603 13:54:09.015087 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.015099 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:09.015107 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:09.015187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:09.054474 1143678 cri.go:89] found id: ""
	I0603 13:54:09.054508 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.054521 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:09.054533 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:09.054590 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:09.090867 1143678 cri.go:89] found id: ""
	I0603 13:54:09.090905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.090917 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:09.090926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:09.090995 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:09.128401 1143678 cri.go:89] found id: ""
	I0603 13:54:09.128433 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.128441 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:09.128447 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:09.128511 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:09.162952 1143678 cri.go:89] found id: ""
	I0603 13:54:09.162992 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.163005 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:09.163013 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:09.163078 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:09.200375 1143678 cri.go:89] found id: ""
	I0603 13:54:09.200402 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.200410 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:09.200416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:09.200495 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:09.244694 1143678 cri.go:89] found id: ""
	I0603 13:54:09.244729 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.244740 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:09.244749 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:09.244818 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:09.281633 1143678 cri.go:89] found id: ""
	I0603 13:54:09.281666 1143678 logs.go:276] 0 containers: []
	W0603 13:54:09.281675 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:09.281686 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:09.281700 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:09.341287 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:09.341331 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:09.355379 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:09.355415 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:09.435934 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:09.435960 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:09.435979 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:09.518203 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:09.518248 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.061538 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:12.076939 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:12.077020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:12.114308 1143678 cri.go:89] found id: ""
	I0603 13:54:12.114344 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.114353 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:12.114359 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:12.114427 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:12.150336 1143678 cri.go:89] found id: ""
	I0603 13:54:12.150368 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.150383 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:12.150390 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:12.150455 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:12.189881 1143678 cri.go:89] found id: ""
	I0603 13:54:12.189934 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.189946 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:12.189954 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:12.190020 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:12.226361 1143678 cri.go:89] found id: ""
	I0603 13:54:12.226396 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.226407 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:12.226415 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:12.226488 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:12.264216 1143678 cri.go:89] found id: ""
	I0603 13:54:12.264257 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.264265 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:12.264271 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:12.264341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:12.306563 1143678 cri.go:89] found id: ""
	I0603 13:54:12.306600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.306612 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:12.306620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:12.306690 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:12.347043 1143678 cri.go:89] found id: ""
	I0603 13:54:12.347082 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.347094 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:12.347105 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:12.347170 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:08.317824 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.816743 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.816776 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.372079 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.872314 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:10.990240 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:13.489762 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:12.383947 1143678 cri.go:89] found id: ""
	I0603 13:54:12.383978 1143678 logs.go:276] 0 containers: []
	W0603 13:54:12.383989 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:12.384001 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:12.384018 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:12.464306 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:12.464348 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:12.505079 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:12.505110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:12.563631 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:12.563666 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:12.578328 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:12.578357 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:12.646015 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.147166 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:15.163786 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:15.163865 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:15.202249 1143678 cri.go:89] found id: ""
	I0603 13:54:15.202286 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.202296 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:15.202304 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:15.202372 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:15.236305 1143678 cri.go:89] found id: ""
	I0603 13:54:15.236345 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.236359 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:15.236368 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:15.236459 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:15.273457 1143678 cri.go:89] found id: ""
	I0603 13:54:15.273493 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.273510 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:15.273521 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:15.273592 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:15.314917 1143678 cri.go:89] found id: ""
	I0603 13:54:15.314951 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.314963 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:15.314984 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:15.315055 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:15.353060 1143678 cri.go:89] found id: ""
	I0603 13:54:15.353098 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.353112 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:15.353118 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:15.353197 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:15.390412 1143678 cri.go:89] found id: ""
	I0603 13:54:15.390448 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.390460 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:15.390469 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:15.390534 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:15.427735 1143678 cri.go:89] found id: ""
	I0603 13:54:15.427771 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.427782 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:15.427789 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:15.427854 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:15.467134 1143678 cri.go:89] found id: ""
	I0603 13:54:15.467165 1143678 logs.go:276] 0 containers: []
	W0603 13:54:15.467175 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:15.467184 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:15.467199 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:15.517924 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:15.517973 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:15.531728 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:15.531760 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:15.608397 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:15.608421 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:15.608444 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:15.688976 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:15.689016 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.319250 1143252 pod_ready.go:102] pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:16.817018 1143252 pod_ready.go:81] duration metric: took 4m0.00664589s for pod "metrics-server-569cc877fc-v7d9t" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:16.817042 1143252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:16.817049 1143252 pod_ready.go:38] duration metric: took 4m6.670583216s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:16.817081 1143252 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:16.817110 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:16.817158 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:16.871314 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:16.871339 1143252 cri.go:89] found id: ""
	I0603 13:54:16.871350 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:16.871405 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.876249 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:16.876319 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:16.917267 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:16.917298 1143252 cri.go:89] found id: ""
	I0603 13:54:16.917310 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:16.917374 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.923290 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:16.923374 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:16.963598 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:16.963619 1143252 cri.go:89] found id: ""
	I0603 13:54:16.963628 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:16.963689 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:16.968201 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:16.968277 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:17.008229 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:17.008264 1143252 cri.go:89] found id: ""
	I0603 13:54:17.008274 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:17.008341 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.012719 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:17.012795 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:17.048353 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.048384 1143252 cri.go:89] found id: ""
	I0603 13:54:17.048394 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:17.048459 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.053094 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:17.053162 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:17.088475 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:17.088507 1143252 cri.go:89] found id: ""
	I0603 13:54:17.088518 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:17.088583 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.093293 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:17.093373 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:17.130335 1143252 cri.go:89] found id: ""
	I0603 13:54:17.130370 1143252 logs.go:276] 0 containers: []
	W0603 13:54:17.130381 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:17.130389 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:17.130472 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:17.176283 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:17.176317 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:17.176324 1143252 cri.go:89] found id: ""
	I0603 13:54:17.176335 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:17.176409 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.181455 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:17.185881 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:17.185902 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:17.239636 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:17.239680 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:17.309488 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:17.309532 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:17.362243 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:17.362282 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:17.401389 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:17.401440 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:17.442095 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:17.442127 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:17.923198 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:17.923247 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:17.939968 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:17.940000 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:18.075054 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:18.075098 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:18.113954 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:18.113994 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:18.181862 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:18.181906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:18.227105 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:18.227137 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:18.272684 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.272721 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:15.371753 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:17.870321 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:19.879331 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:15.990326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.489960 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:18.228279 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:18.242909 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:18.242985 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:18.285400 1143678 cri.go:89] found id: ""
	I0603 13:54:18.285445 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.285455 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:18.285461 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:18.285521 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:18.321840 1143678 cri.go:89] found id: ""
	I0603 13:54:18.321868 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.321877 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:18.321884 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:18.321943 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:18.358856 1143678 cri.go:89] found id: ""
	I0603 13:54:18.358888 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.358902 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:18.358911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:18.358979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:18.395638 1143678 cri.go:89] found id: ""
	I0603 13:54:18.395678 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.395691 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:18.395699 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:18.395766 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:18.435541 1143678 cri.go:89] found id: ""
	I0603 13:54:18.435570 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.435581 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:18.435589 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:18.435653 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:18.469491 1143678 cri.go:89] found id: ""
	I0603 13:54:18.469527 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.469538 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:18.469545 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:18.469615 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:18.507986 1143678 cri.go:89] found id: ""
	I0603 13:54:18.508018 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.508030 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:18.508039 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:18.508106 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:18.542311 1143678 cri.go:89] found id: ""
	I0603 13:54:18.542343 1143678 logs.go:276] 0 containers: []
	W0603 13:54:18.542351 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:18.542361 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:18.542375 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:18.619295 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:18.619337 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:18.662500 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:18.662540 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:18.714392 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:18.714432 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:18.728750 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:18.728785 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:18.800786 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.301554 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:21.315880 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:21.315944 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:21.358178 1143678 cri.go:89] found id: ""
	I0603 13:54:21.358208 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.358217 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:21.358227 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:21.358289 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:21.395873 1143678 cri.go:89] found id: ""
	I0603 13:54:21.395969 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.395995 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:21.396014 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:21.396111 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:21.431781 1143678 cri.go:89] found id: ""
	I0603 13:54:21.431810 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.431822 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:21.431831 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:21.431906 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.472840 1143678 cri.go:89] found id: ""
	I0603 13:54:21.472872 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.472885 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:21.472893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.472955 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.512296 1143678 cri.go:89] found id: ""
	I0603 13:54:21.512333 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.512346 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:21.512353 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.512421 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.547555 1143678 cri.go:89] found id: ""
	I0603 13:54:21.547588 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.547599 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:21.547609 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.547670 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.584972 1143678 cri.go:89] found id: ""
	I0603 13:54:21.585005 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.585013 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.585019 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:21.585085 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:21.621566 1143678 cri.go:89] found id: ""
	I0603 13:54:21.621599 1143678 logs.go:276] 0 containers: []
	W0603 13:54:21.621610 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:21.621623 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:21.621639 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:21.637223 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:21.637263 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:21.712272 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:21.712294 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.712310 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.800453 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:21.800490 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:21.841477 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.841525 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:20.819740 1143252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:20.836917 1143252 api_server.go:72] duration metric: took 4m15.913250824s to wait for apiserver process to appear ...
	I0603 13:54:20.836947 1143252 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:20.836988 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:20.837038 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:20.874034 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:20.874064 1143252 cri.go:89] found id: ""
	I0603 13:54:20.874076 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:20.874146 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.878935 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:20.879020 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:20.920390 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:20.920417 1143252 cri.go:89] found id: ""
	I0603 13:54:20.920425 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:20.920494 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.924858 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:20.924934 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:20.966049 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:20.966077 1143252 cri.go:89] found id: ""
	I0603 13:54:20.966088 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:20.966174 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:20.970734 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:20.970812 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:21.010892 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.010918 1143252 cri.go:89] found id: ""
	I0603 13:54:21.010929 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:21.010994 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.016274 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:21.016347 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:21.055294 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.055318 1143252 cri.go:89] found id: ""
	I0603 13:54:21.055327 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:21.055375 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.060007 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:21.060069 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:21.099200 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:21.099225 1143252 cri.go:89] found id: ""
	I0603 13:54:21.099236 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:21.099309 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.103590 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:21.103662 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:21.140375 1143252 cri.go:89] found id: ""
	I0603 13:54:21.140409 1143252 logs.go:276] 0 containers: []
	W0603 13:54:21.140422 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:21.140431 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:21.140498 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:21.180709 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.180735 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.180739 1143252 cri.go:89] found id: ""
	I0603 13:54:21.180747 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:21.180814 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.184952 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:21.189111 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:21.189140 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:21.663768 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:21.663807 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:21.719542 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:21.719573 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:21.786686 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:21.786725 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:21.824908 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:21.824948 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:21.864778 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:21.864818 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:21.904450 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:21.904480 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:21.942006 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:21.942040 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:21.979636 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:21.979673 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:22.033943 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:22.033980 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:22.048545 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:22.048578 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:22.154866 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:22.154906 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:22.218033 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:22.218073 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:22.374700 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.871898 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:20.989874 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:23.489083 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:24.394864 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:24.408416 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.408527 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.444572 1143678 cri.go:89] found id: ""
	I0603 13:54:24.444603 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.444612 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:24.444618 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.444672 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.483710 1143678 cri.go:89] found id: ""
	I0603 13:54:24.483744 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.483755 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:24.483763 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.483837 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.522396 1143678 cri.go:89] found id: ""
	I0603 13:54:24.522437 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.522450 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:24.522457 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.522520 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.560865 1143678 cri.go:89] found id: ""
	I0603 13:54:24.560896 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.560905 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:24.560911 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.560964 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:24.598597 1143678 cri.go:89] found id: ""
	I0603 13:54:24.598632 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.598643 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:24.598657 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:24.598722 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:24.638854 1143678 cri.go:89] found id: ""
	I0603 13:54:24.638885 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.638897 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:24.638908 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:24.638979 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:24.678039 1143678 cri.go:89] found id: ""
	I0603 13:54:24.678076 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.678088 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:24.678096 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:24.678166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:24.712836 1143678 cri.go:89] found id: ""
	I0603 13:54:24.712871 1143678 logs.go:276] 0 containers: []
	W0603 13:54:24.712883 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:24.712896 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:24.712913 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:24.763503 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:24.763545 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:24.779383 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:24.779416 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:24.867254 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:24.867287 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:24.867307 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:24.944920 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:24.944957 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:24.768551 1143252 api_server.go:253] Checking apiserver healthz at https://192.168.83.246:8443/healthz ...
	I0603 13:54:24.774942 1143252 api_server.go:279] https://192.168.83.246:8443/healthz returned 200:
	ok
	I0603 13:54:24.776278 1143252 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:24.776301 1143252 api_server.go:131] duration metric: took 3.939347802s to wait for apiserver health ...
	I0603 13:54:24.776310 1143252 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:24.776334 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:24.776386 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:24.827107 1143252 cri.go:89] found id: "45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:24.827139 1143252 cri.go:89] found id: ""
	I0603 13:54:24.827152 1143252 logs.go:276] 1 containers: [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a]
	I0603 13:54:24.827210 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.831681 1143252 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:24.831752 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:24.875645 1143252 cri.go:89] found id: "114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:24.875689 1143252 cri.go:89] found id: ""
	I0603 13:54:24.875711 1143252 logs.go:276] 1 containers: [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05]
	I0603 13:54:24.875778 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.880157 1143252 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:24.880256 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:24.932131 1143252 cri.go:89] found id: "f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:24.932157 1143252 cri.go:89] found id: ""
	I0603 13:54:24.932167 1143252 logs.go:276] 1 containers: [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574]
	I0603 13:54:24.932262 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.938104 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:24.938168 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:24.980289 1143252 cri.go:89] found id: "f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:24.980318 1143252 cri.go:89] found id: ""
	I0603 13:54:24.980327 1143252 logs.go:276] 1 containers: [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87]
	I0603 13:54:24.980389 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:24.985608 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:24.985687 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:25.033726 1143252 cri.go:89] found id: "c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.033749 1143252 cri.go:89] found id: ""
	I0603 13:54:25.033757 1143252 logs.go:276] 1 containers: [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1]
	I0603 13:54:25.033811 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.038493 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:25.038561 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:25.077447 1143252 cri.go:89] found id: "a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.077474 1143252 cri.go:89] found id: ""
	I0603 13:54:25.077485 1143252 logs.go:276] 1 containers: [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d]
	I0603 13:54:25.077545 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.081701 1143252 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:25.081770 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:25.120216 1143252 cri.go:89] found id: ""
	I0603 13:54:25.120246 1143252 logs.go:276] 0 containers: []
	W0603 13:54:25.120254 1143252 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:25.120261 1143252 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:25.120313 1143252 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:25.162562 1143252 cri.go:89] found id: "e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.162596 1143252 cri.go:89] found id: "141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.162602 1143252 cri.go:89] found id: ""
	I0603 13:54:25.162613 1143252 logs.go:276] 2 containers: [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88]
	I0603 13:54:25.162678 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.167179 1143252 ssh_runner.go:195] Run: which crictl
	I0603 13:54:25.171531 1143252 logs.go:123] Gathering logs for container status ...
	I0603 13:54:25.171558 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:25.223749 1143252 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:25.223787 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:25.290251 1143252 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:25.290293 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:25.315271 1143252 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:25.315302 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:25.433219 1143252 logs.go:123] Gathering logs for coredns [f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574] ...
	I0603 13:54:25.433257 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8daff2944ee29bea08a8933bbad349b297d31b169ec2591a51b2c5d9ab1d574"
	I0603 13:54:25.473156 1143252 logs.go:123] Gathering logs for kube-scheduler [f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87] ...
	I0603 13:54:25.473194 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1a49ac6ea3e623f316dcc522e3f09bd4658e0666d6e5ae42d45b582ac720d87"
	I0603 13:54:25.513988 1143252 logs.go:123] Gathering logs for kube-controller-manager [a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d] ...
	I0603 13:54:25.514015 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4f8ab9c0a067d9eb51e458f15f3106249233dbbeab72be5e1ec44af2cdfbf3d"
	I0603 13:54:25.587224 1143252 logs.go:123] Gathering logs for kube-apiserver [45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a] ...
	I0603 13:54:25.587260 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45eebdf59dbe2a146e291cb81691cc67c3a992d686094e7a30a0f781096d558a"
	I0603 13:54:25.638872 1143252 logs.go:123] Gathering logs for etcd [114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05] ...
	I0603 13:54:25.638909 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 114ee50eb8f33f312035ed301e5ed9e2d2ff9a93ce3ff46936a17d1370299f05"
	I0603 13:54:25.687323 1143252 logs.go:123] Gathering logs for kube-proxy [c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1] ...
	I0603 13:54:25.687372 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17ec1b1cf666f615cf6352846cdd5d1d3822771c87426cd730d96342f51fad1"
	I0603 13:54:25.739508 1143252 logs.go:123] Gathering logs for storage-provisioner [e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b] ...
	I0603 13:54:25.739539 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0c551e53061e478c5820677f96bd6cb6a0e071b2ca16b138e56ec9b4ebec90b"
	I0603 13:54:25.775066 1143252 logs.go:123] Gathering logs for storage-provisioner [141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88] ...
	I0603 13:54:25.775096 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 141e89821d9bab375aa3627d011cfcf04e4fd50e6bba2ab5e4997fd265f1cb88"
	I0603 13:54:25.811982 1143252 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:25.812016 1143252 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:28.685228 1143252 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:28.685261 1143252 system_pods.go:61] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.685265 1143252 system_pods.go:61] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.685269 1143252 system_pods.go:61] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.685272 1143252 system_pods.go:61] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.685276 1143252 system_pods.go:61] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.685279 1143252 system_pods.go:61] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.685285 1143252 system_pods.go:61] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.685290 1143252 system_pods.go:61] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.685298 1143252 system_pods.go:74] duration metric: took 3.908982484s to wait for pod list to return data ...
	I0603 13:54:28.685305 1143252 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:28.687914 1143252 default_sa.go:45] found service account: "default"
	I0603 13:54:28.687939 1143252 default_sa.go:55] duration metric: took 2.627402ms for default service account to be created ...
	I0603 13:54:28.687947 1143252 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:28.693336 1143252 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:28.693369 1143252 system_pods.go:89] "coredns-7db6d8ff4d-qdjrv" [9a490ea5-c189-4d28-bd6b-509610d35f37] Running
	I0603 13:54:28.693375 1143252 system_pods.go:89] "etcd-embed-certs-223260" [97807b62-195b-4d94-a7f8-754f68ad4f03] Running
	I0603 13:54:28.693379 1143252 system_pods.go:89] "kube-apiserver-embed-certs-223260" [df2f6cde-407c-4ed2-8fec-5fa61a428a88] Running
	I0603 13:54:28.693385 1143252 system_pods.go:89] "kube-controller-manager-embed-certs-223260" [9b8bc1b7-3f43-4626-b9ee-37f5176b7fd6] Running
	I0603 13:54:28.693389 1143252 system_pods.go:89] "kube-proxy-s5vdl" [4c515f67-d265-4140-82ec-ba9ac4ddda80] Running
	I0603 13:54:28.693393 1143252 system_pods.go:89] "kube-scheduler-embed-certs-223260" [d23001bf-d971-42d2-a901-b2ec4b4db649] Running
	I0603 13:54:28.693401 1143252 system_pods.go:89] "metrics-server-569cc877fc-v7d9t" [e89c698d-7aab-4acd-a9b3-5ba0315ad681] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:28.693418 1143252 system_pods.go:89] "storage-provisioner" [6ff65744-2d90-4589-a97f-d6b4d792eab4] Running
	I0603 13:54:28.693438 1143252 system_pods.go:126] duration metric: took 5.484487ms to wait for k8s-apps to be running ...
	I0603 13:54:28.693450 1143252 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:28.693497 1143252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:28.710364 1143252 system_svc.go:56] duration metric: took 16.901982ms WaitForService to wait for kubelet
	I0603 13:54:28.710399 1143252 kubeadm.go:576] duration metric: took 4m23.786738812s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:28.710444 1143252 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:28.713300 1143252 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:28.713328 1143252 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:28.713362 1143252 node_conditions.go:105] duration metric: took 2.909242ms to run NodePressure ...
	I0603 13:54:28.713382 1143252 start.go:240] waiting for startup goroutines ...
	I0603 13:54:28.713392 1143252 start.go:245] waiting for cluster config update ...
	I0603 13:54:28.713424 1143252 start.go:254] writing updated cluster config ...
	I0603 13:54:28.713798 1143252 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:28.767538 1143252 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:28.769737 1143252 out.go:177] * Done! kubectl is now configured to use "embed-certs-223260" cluster and "default" namespace by default
	I0603 13:54:27.370695 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:29.870214 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:25.990136 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:28.489276 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:30.489392 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:27.495908 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:27.509885 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:27.509968 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:27.545591 1143678 cri.go:89] found id: ""
	I0603 13:54:27.545626 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.545635 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:27.545641 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:27.545695 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:27.583699 1143678 cri.go:89] found id: ""
	I0603 13:54:27.583728 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.583740 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:27.583748 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:27.583835 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:27.623227 1143678 cri.go:89] found id: ""
	I0603 13:54:27.623268 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.623277 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:27.623283 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:27.623341 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:27.663057 1143678 cri.go:89] found id: ""
	I0603 13:54:27.663090 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.663102 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:27.663109 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:27.663187 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:27.708448 1143678 cri.go:89] found id: ""
	I0603 13:54:27.708481 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.708489 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:27.708495 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:27.708551 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:27.743629 1143678 cri.go:89] found id: ""
	I0603 13:54:27.743663 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.743674 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:27.743682 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:27.743748 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:27.778094 1143678 cri.go:89] found id: ""
	I0603 13:54:27.778128 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.778137 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:27.778147 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:27.778210 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:27.813137 1143678 cri.go:89] found id: ""
	I0603 13:54:27.813170 1143678 logs.go:276] 0 containers: []
	W0603 13:54:27.813180 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:27.813192 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:27.813208 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:27.861100 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:27.861136 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:27.914752 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:27.914794 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:27.929479 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:27.929511 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:28.002898 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:28.002926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:28.002942 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.581890 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:30.595982 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:30.596068 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:30.638804 1143678 cri.go:89] found id: ""
	I0603 13:54:30.638841 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.638853 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:30.638862 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:30.638942 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:30.677202 1143678 cri.go:89] found id: ""
	I0603 13:54:30.677242 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.677253 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:30.677262 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:30.677329 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:30.717382 1143678 cri.go:89] found id: ""
	I0603 13:54:30.717436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.717446 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:30.717455 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:30.717523 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:30.753691 1143678 cri.go:89] found id: ""
	I0603 13:54:30.753719 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.753728 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:30.753734 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:30.753798 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:30.790686 1143678 cri.go:89] found id: ""
	I0603 13:54:30.790714 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.790723 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:30.790729 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:30.790783 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:30.830196 1143678 cri.go:89] found id: ""
	I0603 13:54:30.830224 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.830237 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:30.830245 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:30.830299 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:30.865952 1143678 cri.go:89] found id: ""
	I0603 13:54:30.865980 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.865992 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:30.866000 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:30.866066 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:30.901561 1143678 cri.go:89] found id: ""
	I0603 13:54:30.901592 1143678 logs.go:276] 0 containers: []
	W0603 13:54:30.901601 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:30.901610 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:30.901627 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:30.979416 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:30.979459 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:31.035024 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:31.035061 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:31.089005 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:31.089046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:31.105176 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:31.105210 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:31.172862 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:32.371040 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.870810 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:32.989041 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:34.989599 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:33.674069 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:33.688423 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:33.688499 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:33.729840 1143678 cri.go:89] found id: ""
	I0603 13:54:33.729876 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.729886 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:33.729893 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:33.729945 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:33.764984 1143678 cri.go:89] found id: ""
	I0603 13:54:33.765010 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.765018 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:33.765025 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:33.765075 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:33.798411 1143678 cri.go:89] found id: ""
	I0603 13:54:33.798446 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.798459 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:33.798468 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:33.798547 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:33.831565 1143678 cri.go:89] found id: ""
	I0603 13:54:33.831600 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.831611 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:33.831620 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:33.831688 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:33.869701 1143678 cri.go:89] found id: ""
	I0603 13:54:33.869727 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.869735 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:33.869741 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:33.869802 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:33.906108 1143678 cri.go:89] found id: ""
	I0603 13:54:33.906134 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.906144 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:33.906153 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:33.906218 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:33.946577 1143678 cri.go:89] found id: ""
	I0603 13:54:33.946607 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.946615 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:33.946621 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:33.946673 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:33.986691 1143678 cri.go:89] found id: ""
	I0603 13:54:33.986724 1143678 logs.go:276] 0 containers: []
	W0603 13:54:33.986743 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:33.986757 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:33.986775 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:34.044068 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:34.044110 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:34.059686 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:34.059724 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:34.141490 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:34.141514 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:34.141531 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:34.227890 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:34.227930 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:36.778969 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:36.792527 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:36.792612 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:36.828044 1143678 cri.go:89] found id: ""
	I0603 13:54:36.828083 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.828096 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:36.828102 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:36.828166 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:36.863869 1143678 cri.go:89] found id: ""
	I0603 13:54:36.863905 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.863917 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:36.863926 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:36.863996 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:36.899610 1143678 cri.go:89] found id: ""
	I0603 13:54:36.899649 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.899661 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:36.899669 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:36.899742 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:36.938627 1143678 cri.go:89] found id: ""
	I0603 13:54:36.938664 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.938675 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:36.938683 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:36.938739 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:36.973810 1143678 cri.go:89] found id: ""
	I0603 13:54:36.973842 1143678 logs.go:276] 0 containers: []
	W0603 13:54:36.973857 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:36.973863 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:36.973915 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.013759 1143678 cri.go:89] found id: ""
	I0603 13:54:37.013792 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.013805 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:37.013813 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.013881 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.049665 1143678 cri.go:89] found id: ""
	I0603 13:54:37.049697 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.049706 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.049712 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:37.049787 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:37.087405 1143678 cri.go:89] found id: ""
	I0603 13:54:37.087436 1143678 logs.go:276] 0 containers: []
	W0603 13:54:37.087446 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:37.087457 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:37.087470 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:37.126443 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.126476 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.177976 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:37.178015 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:37.192821 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:37.192860 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:37.267895 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:37.267926 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:37.267945 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:36.871536 1143450 pod_ready.go:102] pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:37.371048 1143450 pod_ready.go:81] duration metric: took 4m0.007102739s for pod "metrics-server-569cc877fc-8xw9v" in "kube-system" namespace to be "Ready" ...
	E0603 13:54:37.371080 1143450 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0603 13:54:37.371092 1143450 pod_ready.go:38] duration metric: took 4m5.236838117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:54:37.371111 1143450 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:54:37.371145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:37.371202 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:37.428454 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:37.428487 1143450 cri.go:89] found id: ""
	I0603 13:54:37.428498 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:37.428564 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.434473 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:37.434552 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:37.476251 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.476288 1143450 cri.go:89] found id: ""
	I0603 13:54:37.476300 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:37.476368 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.483190 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:37.483280 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:37.528660 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.528693 1143450 cri.go:89] found id: ""
	I0603 13:54:37.528704 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:37.528797 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.533716 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:37.533809 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:37.573995 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.574016 1143450 cri.go:89] found id: ""
	I0603 13:54:37.574025 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:37.574071 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.578385 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:37.578465 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:37.616468 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:37.616511 1143450 cri.go:89] found id: ""
	I0603 13:54:37.616522 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:37.616603 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.621204 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:37.621277 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:37.661363 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.661390 1143450 cri.go:89] found id: ""
	I0603 13:54:37.661401 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:37.661507 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.665969 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:37.666055 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:37.705096 1143450 cri.go:89] found id: ""
	I0603 13:54:37.705128 1143450 logs.go:276] 0 containers: []
	W0603 13:54:37.705136 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:37.705142 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:37.705210 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:37.746365 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:37.746400 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.746404 1143450 cri.go:89] found id: ""
	I0603 13:54:37.746412 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:37.746470 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.750874 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:37.755146 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:37.755175 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:37.811365 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:37.811403 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:37.849687 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:37.849729 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:37.904870 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:37.904909 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:37.955448 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:37.955497 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:37.996659 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:37.996687 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:38.047501 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:38.047540 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:38.090932 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:38.090969 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:38.606612 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:38.606672 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:38.652732 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:38.652774 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:38.670570 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:38.670620 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:38.812156 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:38.812208 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:38.862940 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:38.862988 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:37.491134 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.990379 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:39.846505 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:39.860426 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:39.860514 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:39.896684 1143678 cri.go:89] found id: ""
	I0603 13:54:39.896712 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.896726 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:54:39.896736 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:39.896801 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:39.932437 1143678 cri.go:89] found id: ""
	I0603 13:54:39.932482 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.932494 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:54:39.932503 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:39.932571 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:39.967850 1143678 cri.go:89] found id: ""
	I0603 13:54:39.967883 1143678 logs.go:276] 0 containers: []
	W0603 13:54:39.967891 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:54:39.967898 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:39.967952 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:40.003255 1143678 cri.go:89] found id: ""
	I0603 13:54:40.003284 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.003292 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:54:40.003298 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:40.003351 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:40.045865 1143678 cri.go:89] found id: ""
	I0603 13:54:40.045892 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.045904 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:54:40.045912 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:40.045976 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:40.082469 1143678 cri.go:89] found id: ""
	I0603 13:54:40.082498 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.082507 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:54:40.082513 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:40.082584 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:40.117181 1143678 cri.go:89] found id: ""
	I0603 13:54:40.117231 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.117242 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:40.117250 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:54:40.117320 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:54:40.157776 1143678 cri.go:89] found id: ""
	I0603 13:54:40.157813 1143678 logs.go:276] 0 containers: []
	W0603 13:54:40.157822 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:54:40.157832 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:40.157848 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:40.213374 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:40.213437 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:40.228298 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:40.228330 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:54:40.305450 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:54:40.305485 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:40.305503 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:40.393653 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:54:40.393704 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.405129 1143450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:41.423234 1143450 api_server.go:72] duration metric: took 4m14.998447047s to wait for apiserver process to appear ...
	I0603 13:54:41.423266 1143450 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:54:41.423312 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:41.423374 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:41.463540 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.463562 1143450 cri.go:89] found id: ""
	I0603 13:54:41.463570 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:41.463620 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.468145 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:41.468226 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:41.511977 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.512000 1143450 cri.go:89] found id: ""
	I0603 13:54:41.512017 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:41.512081 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.516600 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:41.516674 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:41.554392 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:41.554420 1143450 cri.go:89] found id: ""
	I0603 13:54:41.554443 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:41.554508 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.558983 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:41.559039 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:41.597710 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:41.597737 1143450 cri.go:89] found id: ""
	I0603 13:54:41.597747 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:41.597811 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.602164 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:41.602227 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:41.639422 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:41.639452 1143450 cri.go:89] found id: ""
	I0603 13:54:41.639462 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:41.639532 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.644093 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:41.644171 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:41.682475 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.682506 1143450 cri.go:89] found id: ""
	I0603 13:54:41.682515 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:41.682578 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.687654 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:41.687734 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:41.724804 1143450 cri.go:89] found id: ""
	I0603 13:54:41.724839 1143450 logs.go:276] 0 containers: []
	W0603 13:54:41.724850 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:41.724858 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:41.724928 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:41.764625 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:41.764653 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:41.764659 1143450 cri.go:89] found id: ""
	I0603 13:54:41.764670 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:41.764736 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.769499 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:41.773782 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:41.773806 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:41.816486 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:41.816520 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:41.833538 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:41.833569 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:41.877958 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:41.878004 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:41.922575 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:41.922612 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:41.983865 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:41.983900 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:42.032746 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:42.032773 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:42.076129 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:42.076166 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:42.129061 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:42.129099 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:42.248179 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:42.248213 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:42.292179 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:42.292288 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:42.340447 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:42.340493 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:42.381993 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:42.382024 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:42.488926 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:44.990221 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:42.934691 1143678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:54:42.948505 1143678 kubeadm.go:591] duration metric: took 4m4.45791317s to restartPrimaryControlPlane
	W0603 13:54:42.948592 1143678 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:54:42.948629 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:54:48.316951 1143678 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.36829775s)
	I0603 13:54:48.317039 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:48.333630 1143678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:54:48.345772 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:54:48.357359 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:54:48.357386 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:54:48.357477 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:54:48.367844 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:54:48.367917 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:54:48.379349 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:54:48.389684 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:54:48.389760 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:54:48.401562 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.412670 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:54:48.412743 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:54:48.424261 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:54:48.434598 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:54:48.434674 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:54:48.446187 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:54:48.527873 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:54:48.528073 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:54:48.695244 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:54:48.695401 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:54:48.695581 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:54:48.930141 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:54:45.281199 1143450 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8444/healthz ...
	I0603 13:54:45.286305 1143450 api_server.go:279] https://192.168.39.177:8444/healthz returned 200:
	ok
	I0603 13:54:45.287421 1143450 api_server.go:141] control plane version: v1.30.1
	I0603 13:54:45.287444 1143450 api_server.go:131] duration metric: took 3.864171356s to wait for apiserver health ...
	I0603 13:54:45.287455 1143450 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:54:45.287486 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:54:45.287540 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:54:45.328984 1143450 cri.go:89] found id: "50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.329012 1143450 cri.go:89] found id: ""
	I0603 13:54:45.329022 1143450 logs.go:276] 1 containers: [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836]
	I0603 13:54:45.329075 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.334601 1143450 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:54:45.334683 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:54:45.382942 1143450 cri.go:89] found id: "c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:45.382967 1143450 cri.go:89] found id: ""
	I0603 13:54:45.382978 1143450 logs.go:276] 1 containers: [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d]
	I0603 13:54:45.383039 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.387904 1143450 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:54:45.387969 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:54:45.431948 1143450 cri.go:89] found id: "bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.431981 1143450 cri.go:89] found id: ""
	I0603 13:54:45.431992 1143450 logs.go:276] 1 containers: [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d]
	I0603 13:54:45.432052 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.440993 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:54:45.441074 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:54:45.490086 1143450 cri.go:89] found id: "7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.490114 1143450 cri.go:89] found id: ""
	I0603 13:54:45.490125 1143450 logs.go:276] 1 containers: [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a]
	I0603 13:54:45.490194 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.494628 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:54:45.494688 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:54:45.532264 1143450 cri.go:89] found id: "9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:45.532296 1143450 cri.go:89] found id: ""
	I0603 13:54:45.532307 1143450 logs.go:276] 1 containers: [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b]
	I0603 13:54:45.532374 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.536914 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:54:45.536985 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:54:45.576641 1143450 cri.go:89] found id: "b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:45.576663 1143450 cri.go:89] found id: ""
	I0603 13:54:45.576671 1143450 logs.go:276] 1 containers: [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7]
	I0603 13:54:45.576720 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.580872 1143450 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:54:45.580926 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:54:45.628834 1143450 cri.go:89] found id: ""
	I0603 13:54:45.628864 1143450 logs.go:276] 0 containers: []
	W0603 13:54:45.628872 1143450 logs.go:278] No container was found matching "kindnet"
	I0603 13:54:45.628879 1143450 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0603 13:54:45.628931 1143450 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0603 13:54:45.671689 1143450 cri.go:89] found id: "969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:45.671719 1143450 cri.go:89] found id: "bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:45.671727 1143450 cri.go:89] found id: ""
	I0603 13:54:45.671740 1143450 logs.go:276] 2 containers: [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4]
	I0603 13:54:45.671799 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.677161 1143450 ssh_runner.go:195] Run: which crictl
	I0603 13:54:45.682179 1143450 logs.go:123] Gathering logs for container status ...
	I0603 13:54:45.682219 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0603 13:54:45.731155 1143450 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:54:45.731192 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0603 13:54:45.846365 1143450 logs.go:123] Gathering logs for kube-apiserver [50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836] ...
	I0603 13:54:45.846411 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50541b09cc089f8b3b5115e8ef71147a126246b62636287bca5c4f39e1e8e836"
	I0603 13:54:45.907694 1143450 logs.go:123] Gathering logs for coredns [bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d] ...
	I0603 13:54:45.907733 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc9ddfc8f250badc38397518def822171251effc31acbdde868ba8bb0c98d12d"
	I0603 13:54:45.952881 1143450 logs.go:123] Gathering logs for kube-scheduler [7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a] ...
	I0603 13:54:45.952919 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aab9931698b9d0203eed0c81b909670718bd813bef6c28ca6443ed29cb48a8a"
	I0603 13:54:45.998674 1143450 logs.go:123] Gathering logs for kube-controller-manager [b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7] ...
	I0603 13:54:45.998722 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97dd1f775dd34d7e78f9718437de49993c41b11ea7e115646f8829429d502a7"
	I0603 13:54:46.061902 1143450 logs.go:123] Gathering logs for storage-provisioner [969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240] ...
	I0603 13:54:46.061949 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 969178964b33deb4efbb9f1bf24dec81423d89157aa4accc7f884f8ba8994240"
	I0603 13:54:46.106017 1143450 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:54:46.106056 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:54:46.473915 1143450 logs.go:123] Gathering logs for kubelet ...
	I0603 13:54:46.473981 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:54:46.530212 1143450 logs.go:123] Gathering logs for dmesg ...
	I0603 13:54:46.530260 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:54:46.545954 1143450 logs.go:123] Gathering logs for etcd [c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d] ...
	I0603 13:54:46.545996 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1051588032f5077dad5975ae7f21cc2347b9494f7ac3923207938f8ad3bca3d"
	I0603 13:54:46.595057 1143450 logs.go:123] Gathering logs for kube-proxy [9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b] ...
	I0603 13:54:46.595097 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9359de3110480b09f8ca3add9f49910f4de5b2e40a34cab04863cb1813bdcc5b"
	I0603 13:54:46.637835 1143450 logs.go:123] Gathering logs for storage-provisioner [bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4] ...
	I0603 13:54:46.637872 1143450 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc407a1d19d20012384eacdaf1cd2ec5399dfea2806c8961de8b248a0944f8d4"
	I0603 13:54:49.190539 1143450 system_pods.go:59] 8 kube-system pods found
	I0603 13:54:49.190572 1143450 system_pods.go:61] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.190577 1143450 system_pods.go:61] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.190582 1143450 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.190586 1143450 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.190590 1143450 system_pods.go:61] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.190593 1143450 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.190602 1143450 system_pods.go:61] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.190609 1143450 system_pods.go:61] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.190620 1143450 system_pods.go:74] duration metric: took 3.903157143s to wait for pod list to return data ...
	I0603 13:54:49.190633 1143450 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:54:49.193192 1143450 default_sa.go:45] found service account: "default"
	I0603 13:54:49.193219 1143450 default_sa.go:55] duration metric: took 2.575016ms for default service account to be created ...
	I0603 13:54:49.193229 1143450 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:54:49.202028 1143450 system_pods.go:86] 8 kube-system pods found
	I0603 13:54:49.202065 1143450 system_pods.go:89] "coredns-7db6d8ff4d-flxqj" [a116f363-ca50-4e2d-8c77-e99498c81e36] Running
	I0603 13:54:49.202074 1143450 system_pods.go:89] "etcd-default-k8s-diff-port-030870" [4134b8e4-b7c4-4571-ae7f-f1eff2be2427] Running
	I0603 13:54:49.202081 1143450 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-030870" [38fe3d48-9d20-448a-b8d1-7c3af8ab1d2b] Running
	I0603 13:54:49.202088 1143450 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-030870" [5c8f2fc4-fc4f-48f8-8d81-3b64aa9a93c3] Running
	I0603 13:54:49.202094 1143450 system_pods.go:89] "kube-proxy-thsrx" [96df5442-b343-47c8-a561-681a2d568d50] Running
	I0603 13:54:49.202100 1143450 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-030870" [1f2c23a1-1c2c-463f-a5f0-e8f1bb8956f6] Running
	I0603 13:54:49.202113 1143450 system_pods.go:89] "metrics-server-569cc877fc-8xw9v" [4ab08177-2171-493b-928c-456d8a21fd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:54:49.202124 1143450 system_pods.go:89] "storage-provisioner" [64d080e5-d582-4ee5-adbc-a652e8e2b820] Running
	I0603 13:54:49.202135 1143450 system_pods.go:126] duration metric: took 8.899065ms to wait for k8s-apps to be running ...
	I0603 13:54:49.202152 1143450 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:54:49.202209 1143450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:54:49.220199 1143450 system_svc.go:56] duration metric: took 18.025994ms WaitForService to wait for kubelet
	I0603 13:54:49.220242 1143450 kubeadm.go:576] duration metric: took 4m22.79546223s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:54:49.220269 1143450 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:54:49.223327 1143450 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:54:49.223354 1143450 node_conditions.go:123] node cpu capacity is 2
	I0603 13:54:49.223367 1143450 node_conditions.go:105] duration metric: took 3.093435ms to run NodePressure ...
	I0603 13:54:49.223383 1143450 start.go:240] waiting for startup goroutines ...
	I0603 13:54:49.223393 1143450 start.go:245] waiting for cluster config update ...
	I0603 13:54:49.223408 1143450 start.go:254] writing updated cluster config ...
	I0603 13:54:49.223704 1143450 ssh_runner.go:195] Run: rm -f paused
	I0603 13:54:49.277924 1143450 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:54:49.280442 1143450 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-030870" cluster and "default" namespace by default
	I0603 13:54:48.932024 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:54:48.932110 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:54:48.932168 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:54:48.932235 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:54:48.932305 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:54:48.932481 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:54:48.932639 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:54:48.933272 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:54:48.933771 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:54:48.934251 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:54:48.934654 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:54:48.934712 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:54:48.934762 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:54:49.063897 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:54:49.266680 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:54:49.364943 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:54:49.628905 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:54:49.645861 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:54:49.645991 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:54:49.646049 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:54:49.795196 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:54:47.490336 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.989543 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:49.798407 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:54:49.798564 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:54:49.800163 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:54:49.802226 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:54:49.803809 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:54:49.806590 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:54:52.490088 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:54.990092 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:57.488119 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:54:59.489775 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:01.490194 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:03.989075 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:05.990054 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:08.489226 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:10.989028 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:13.489118 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:15.489176 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:17.989008 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:20.489091 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:22.989284 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:24.990020 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.489326 1142862 pod_ready.go:102] pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace has status "Ready":"False"
	I0603 13:55:27.983679 1142862 pod_ready.go:81] duration metric: took 4m0.001142992s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" ...
	E0603 13:55:27.983708 1142862 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtvrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0603 13:55:27.983731 1142862 pod_ready.go:38] duration metric: took 4m12.038904247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:55:27.983760 1142862 kubeadm.go:591] duration metric: took 4m21.273943202s to restartPrimaryControlPlane
	W0603 13:55:27.983831 1142862 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0603 13:55:27.983865 1142862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:55:29.807867 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:55:29.808474 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:29.808754 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:34.809455 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:34.809722 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:44.810305 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:55:44.810491 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:55:59.870853 1142862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.886953189s)
	I0603 13:55:59.870958 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:55:59.889658 1142862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0603 13:55:59.901529 1142862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:55:59.914241 1142862 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:55:59.914266 1142862 kubeadm.go:156] found existing configuration files:
	
	I0603 13:55:59.914312 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:55:59.924884 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:55:59.924950 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:55:59.935494 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:55:59.946222 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:55:59.946321 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:55:59.956749 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.967027 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:55:59.967110 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:55:59.979124 1142862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:55:59.989689 1142862 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:55:59.989751 1142862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0603 13:56:00.000616 1142862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:00.230878 1142862 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:04.811725 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:04.811929 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:08.995375 1142862 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0603 13:56:08.995463 1142862 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:08.995588 1142862 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:08.995724 1142862 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:08.995874 1142862 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:08.995970 1142862 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:08.997810 1142862 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:08.997914 1142862 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:08.998045 1142862 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:08.998154 1142862 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:08.998321 1142862 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:08.998423 1142862 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:08.998506 1142862 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:08.998578 1142862 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:08.998665 1142862 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:08.998764 1142862 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:08.998860 1142862 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:08.998919 1142862 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:08.999011 1142862 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:08.999111 1142862 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:08.999202 1142862 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0603 13:56:08.999275 1142862 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:08.999354 1142862 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:08.999423 1142862 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:08.999538 1142862 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:08.999692 1142862 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:09.001133 1142862 out.go:204]   - Booting up control plane ...
	I0603 13:56:09.001218 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:09.001293 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:09.001354 1142862 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:09.001499 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:09.001584 1142862 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:09.001637 1142862 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:09.001768 1142862 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0603 13:56:09.001881 1142862 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0603 13:56:09.001941 1142862 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.923053ms
	I0603 13:56:09.002010 1142862 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0603 13:56:09.002090 1142862 kubeadm.go:309] [api-check] The API server is healthy after 5.502208975s
	I0603 13:56:09.002224 1142862 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0603 13:56:09.002363 1142862 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0603 13:56:09.002457 1142862 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0603 13:56:09.002647 1142862 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-817450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0603 13:56:09.002713 1142862 kubeadm.go:309] [bootstrap-token] Using token: a7hbk8.xb8is7k6ewa3l3ya
	I0603 13:56:09.004666 1142862 out.go:204]   - Configuring RBAC rules ...
	I0603 13:56:09.004792 1142862 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0603 13:56:09.004883 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0603 13:56:09.005026 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0603 13:56:09.005234 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0603 13:56:09.005389 1142862 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0603 13:56:09.005531 1142862 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0603 13:56:09.005651 1142862 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0603 13:56:09.005709 1142862 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0603 13:56:09.005779 1142862 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0603 13:56:09.005787 1142862 kubeadm.go:309] 
	I0603 13:56:09.005869 1142862 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0603 13:56:09.005885 1142862 kubeadm.go:309] 
	I0603 13:56:09.006014 1142862 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0603 13:56:09.006034 1142862 kubeadm.go:309] 
	I0603 13:56:09.006076 1142862 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0603 13:56:09.006136 1142862 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0603 13:56:09.006197 1142862 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0603 13:56:09.006203 1142862 kubeadm.go:309] 
	I0603 13:56:09.006263 1142862 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0603 13:56:09.006273 1142862 kubeadm.go:309] 
	I0603 13:56:09.006330 1142862 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0603 13:56:09.006338 1142862 kubeadm.go:309] 
	I0603 13:56:09.006393 1142862 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0603 13:56:09.006476 1142862 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0603 13:56:09.006542 1142862 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0603 13:56:09.006548 1142862 kubeadm.go:309] 
	I0603 13:56:09.006629 1142862 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0603 13:56:09.006746 1142862 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0603 13:56:09.006758 1142862 kubeadm.go:309] 
	I0603 13:56:09.006850 1142862 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.006987 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
	I0603 13:56:09.007028 1142862 kubeadm.go:309] 	--control-plane 
	I0603 13:56:09.007037 1142862 kubeadm.go:309] 
	I0603 13:56:09.007141 1142862 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0603 13:56:09.007170 1142862 kubeadm.go:309] 
	I0603 13:56:09.007266 1142862 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a7hbk8.xb8is7k6ewa3l3ya \
	I0603 13:56:09.007427 1142862 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 
	I0603 13:56:09.007451 1142862 cni.go:84] Creating CNI manager for ""
	I0603 13:56:09.007464 1142862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 13:56:09.009292 1142862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0603 13:56:09.010750 1142862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0603 13:56:09.022810 1142862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0603 13:56:09.052132 1142862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.052150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-817450 minikube.k8s.io/updated_at=2024_06_03T13_56_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=no-preload-817450 minikube.k8s.io/primary=true
	I0603 13:56:09.291610 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:09.296892 1142862 ops.go:34] apiserver oom_adj: -16
	I0603 13:56:09.792736 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.292471 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:10.792688 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.291782 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:11.792454 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.292056 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:12.792150 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.292620 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:13.792024 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.292501 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:14.791790 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.292128 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:15.792608 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.292106 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:16.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.292276 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:17.791876 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.292644 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:18.792571 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.292064 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:19.791908 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.292511 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:20.792137 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.292153 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.791809 1142862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0603 13:56:21.882178 1142862 kubeadm.go:1107] duration metric: took 12.830108615s to wait for elevateKubeSystemPrivileges
	W0603 13:56:21.882223 1142862 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0603 13:56:21.882236 1142862 kubeadm.go:393] duration metric: took 5m15.237452092s to StartCluster
	I0603 13:56:21.882260 1142862 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.882368 1142862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:56:21.883986 1142862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 13:56:21.884288 1142862 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.125 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0603 13:56:21.885915 1142862 out.go:177] * Verifying Kubernetes components...
	I0603 13:56:21.884411 1142862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0603 13:56:21.884504 1142862 config.go:182] Loaded profile config "no-preload-817450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:56:21.887156 1142862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0603 13:56:21.887168 1142862 addons.go:69] Setting storage-provisioner=true in profile "no-preload-817450"
	I0603 13:56:21.887199 1142862 addons.go:69] Setting metrics-server=true in profile "no-preload-817450"
	I0603 13:56:21.887230 1142862 addons.go:234] Setting addon storage-provisioner=true in "no-preload-817450"
	W0603 13:56:21.887245 1142862 addons.go:243] addon storage-provisioner should already be in state true
	I0603 13:56:21.887261 1142862 addons.go:234] Setting addon metrics-server=true in "no-preload-817450"
	W0603 13:56:21.887276 1142862 addons.go:243] addon metrics-server should already be in state true
	I0603 13:56:21.887295 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887316 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.887156 1142862 addons.go:69] Setting default-storageclass=true in profile "no-preload-817450"
	I0603 13:56:21.887366 1142862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-817450"
	I0603 13:56:21.887709 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887711 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887749 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887752 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.887779 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.887778 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.906019 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37881
	I0603 13:56:21.906319 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I0603 13:56:21.906563 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0603 13:56:21.906601 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.906714 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907043 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.907126 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907143 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907269 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907288 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907558 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.907578 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.907752 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.907891 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908248 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.908269 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.908419 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.908487 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.909150 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.909175 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.912898 1142862 addons.go:234] Setting addon default-storageclass=true in "no-preload-817450"
	W0603 13:56:21.912926 1142862 addons.go:243] addon default-storageclass should already be in state true
	I0603 13:56:21.912963 1142862 host.go:66] Checking if "no-preload-817450" exists ...
	I0603 13:56:21.913361 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.913413 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.928877 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0603 13:56:21.929336 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44735
	I0603 13:56:21.929541 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930006 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930064 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0603 13:56:21.930161 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930186 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930580 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.930723 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.930798 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.930812 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.930891 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.931037 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.931052 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.931187 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931369 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.931394 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.932113 1142862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:56:21.932140 1142862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:56:21.933613 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.936068 1142862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0603 13:56:21.934518 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.937788 1142862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:21.937821 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0603 13:56:21.937844 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.939174 1142862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0603 13:56:21.940435 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0603 13:56:21.940458 1142862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0603 13:56:21.940559 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.942628 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.943950 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944227 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944257 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944449 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944658 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.944734 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.944754 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.944780 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.944919 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.944932 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.945154 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.945309 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.945457 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:21.951140 1142862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0603 13:56:21.951606 1142862 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:56:21.952125 1142862 main.go:141] libmachine: Using API Version  1
	I0603 13:56:21.952152 1142862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:56:21.952579 1142862 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:56:21.952808 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetState
	I0603 13:56:21.954505 1142862 main.go:141] libmachine: (no-preload-817450) Calling .DriverName
	I0603 13:56:21.954760 1142862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:21.954781 1142862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0603 13:56:21.954801 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHHostname
	I0603 13:56:21.958298 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.958816 1142862 main.go:141] libmachine: (no-preload-817450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cc:be", ip: ""} in network mk-no-preload-817450: {Iface:virbr4 ExpiryTime:2024-06-03 14:41:07 +0000 UTC Type:0 Mac:52:54:00:8f:cc:be Iaid: IPaddr:192.168.72.125 Prefix:24 Hostname:no-preload-817450 Clientid:01:52:54:00:8f:cc:be}
	I0603 13:56:21.958851 1142862 main.go:141] libmachine: (no-preload-817450) DBG | domain no-preload-817450 has defined IP address 192.168.72.125 and MAC address 52:54:00:8f:cc:be in network mk-no-preload-817450
	I0603 13:56:21.959086 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHPort
	I0603 13:56:21.959325 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHKeyPath
	I0603 13:56:21.959515 1142862 main.go:141] libmachine: (no-preload-817450) Calling .GetSSHUsername
	I0603 13:56:21.959678 1142862 sshutil.go:53] new ssh client: &{IP:192.168.72.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/no-preload-817450/id_rsa Username:docker}
	I0603 13:56:22.102359 1142862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0603 13:56:22.121380 1142862 node_ready.go:35] waiting up to 6m0s for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135572 1142862 node_ready.go:49] node "no-preload-817450" has status "Ready":"True"
	I0603 13:56:22.135599 1142862 node_ready.go:38] duration metric: took 14.156504ms for node "no-preload-817450" to be "Ready" ...
	I0603 13:56:22.135614 1142862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:22.151036 1142862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:22.283805 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0603 13:56:22.288913 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0603 13:56:22.288938 1142862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0603 13:56:22.297769 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0603 13:56:22.329187 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0603 13:56:22.329221 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0603 13:56:22.393569 1142862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:22.393594 1142862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0603 13:56:22.435605 1142862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0603 13:56:23.470078 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18622743s)
	I0603 13:56:23.470155 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.172344092s)
	I0603 13:56:23.470171 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470192 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470200 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470216 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470515 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.470553 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470567 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470576 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470586 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470589 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470602 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.470613 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.470625 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.470807 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.470823 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.471108 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.471138 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.471180 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492187 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.492226 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.492596 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.492618 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.492636 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.892903 1142862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.45716212s)
	I0603 13:56:23.892991 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893006 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893418 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893426 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893442 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893459 1142862 main.go:141] libmachine: Making call to close driver server
	I0603 13:56:23.893468 1142862 main.go:141] libmachine: (no-preload-817450) Calling .Close
	I0603 13:56:23.893745 1142862 main.go:141] libmachine: (no-preload-817450) DBG | Closing plugin on server side
	I0603 13:56:23.893790 1142862 main.go:141] libmachine: Successfully made call to close driver server
	I0603 13:56:23.893811 1142862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0603 13:56:23.893832 1142862 addons.go:475] Verifying addon metrics-server=true in "no-preload-817450"
	I0603 13:56:23.895990 1142862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0603 13:56:23.897968 1142862 addons.go:510] duration metric: took 2.013558036s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0603 13:56:24.157803 1142862 pod_ready.go:102] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"False"
	I0603 13:56:24.658730 1142862 pod_ready.go:92] pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.658765 1142862 pod_ready.go:81] duration metric: took 2.507699067s for pod "coredns-7db6d8ff4d-f8pbl" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.658779 1142862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664053 1142862 pod_ready.go:92] pod "etcd-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.664084 1142862 pod_ready.go:81] duration metric: took 5.2962ms for pod "etcd-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.664096 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668496 1142862 pod_ready.go:92] pod "kube-apiserver-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.668521 1142862 pod_ready.go:81] duration metric: took 4.417565ms for pod "kube-apiserver-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.668533 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673549 1142862 pod_ready.go:92] pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.673568 1142862 pod_ready.go:81] duration metric: took 5.026882ms for pod "kube-controller-manager-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.673577 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678207 1142862 pod_ready.go:92] pod "kube-proxy-t45fn" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:24.678228 1142862 pod_ready.go:81] duration metric: took 4.644345ms for pod "kube-proxy-t45fn" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:24.678239 1142862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056174 1142862 pod_ready.go:92] pod "kube-scheduler-no-preload-817450" in "kube-system" namespace has status "Ready":"True"
	I0603 13:56:25.056204 1142862 pod_ready.go:81] duration metric: took 377.957963ms for pod "kube-scheduler-no-preload-817450" in "kube-system" namespace to be "Ready" ...
	I0603 13:56:25.056214 1142862 pod_ready.go:38] duration metric: took 2.920586356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0603 13:56:25.056231 1142862 api_server.go:52] waiting for apiserver process to appear ...
	I0603 13:56:25.056294 1142862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:56:25.071253 1142862 api_server.go:72] duration metric: took 3.186917827s to wait for apiserver process to appear ...
	I0603 13:56:25.071291 1142862 api_server.go:88] waiting for apiserver healthz status ...
	I0603 13:56:25.071319 1142862 api_server.go:253] Checking apiserver healthz at https://192.168.72.125:8443/healthz ...
	I0603 13:56:25.076592 1142862 api_server.go:279] https://192.168.72.125:8443/healthz returned 200:
	ok
	I0603 13:56:25.077531 1142862 api_server.go:141] control plane version: v1.30.1
	I0603 13:56:25.077553 1142862 api_server.go:131] duration metric: took 6.255263ms to wait for apiserver health ...
	I0603 13:56:25.077561 1142862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0603 13:56:25.258520 1142862 system_pods.go:59] 9 kube-system pods found
	I0603 13:56:25.258552 1142862 system_pods.go:61] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.258557 1142862 system_pods.go:61] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.258560 1142862 system_pods.go:61] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.258565 1142862 system_pods.go:61] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.258569 1142862 system_pods.go:61] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.258573 1142862 system_pods.go:61] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.258578 1142862 system_pods.go:61] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.258585 1142862 system_pods.go:61] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.258591 1142862 system_pods.go:61] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.258603 1142862 system_pods.go:74] duration metric: took 181.034608ms to wait for pod list to return data ...
	I0603 13:56:25.258618 1142862 default_sa.go:34] waiting for default service account to be created ...
	I0603 13:56:25.454775 1142862 default_sa.go:45] found service account: "default"
	I0603 13:56:25.454810 1142862 default_sa.go:55] duration metric: took 196.18004ms for default service account to be created ...
	I0603 13:56:25.454820 1142862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0603 13:56:25.658868 1142862 system_pods.go:86] 9 kube-system pods found
	I0603 13:56:25.658908 1142862 system_pods.go:89] "coredns-7db6d8ff4d-f8pbl" [201e687b-1c1b-4030-8b59-b0257a0f876c] Running
	I0603 13:56:25.658919 1142862 system_pods.go:89] "coredns-7db6d8ff4d-jgk4p" [75956644-426d-49a7-b80c-492c4284f438] Running
	I0603 13:56:25.658926 1142862 system_pods.go:89] "etcd-no-preload-817450" [51d6541e-42ba-4d69-938d-0f2d379572ec] Running
	I0603 13:56:25.658932 1142862 system_pods.go:89] "kube-apiserver-no-preload-817450" [76c05ee7-8f8c-4280-af34-534c73422c51] Running
	I0603 13:56:25.658938 1142862 system_pods.go:89] "kube-controller-manager-no-preload-817450" [e3394427-3c75-4fb4-bd08-b22b8b6ad9eb] Running
	I0603 13:56:25.658944 1142862 system_pods.go:89] "kube-proxy-t45fn" [0578c151-2b36-4125-83f8-f4fbd62a1dc4] Running
	I0603 13:56:25.658950 1142862 system_pods.go:89] "kube-scheduler-no-preload-817450" [9d7c419f-d671-4b0a-bfee-7fe26c690312] Running
	I0603 13:56:25.658959 1142862 system_pods.go:89] "metrics-server-569cc877fc-j2lpf" [4f776017-1575-4461-a7c8-656e5a170460] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0603 13:56:25.658970 1142862 system_pods.go:89] "storage-provisioner" [f22655fc-5571-496e-a93f-3970d1693435] Running
	I0603 13:56:25.658983 1142862 system_pods.go:126] duration metric: took 204.156078ms to wait for k8s-apps to be running ...
	I0603 13:56:25.658999 1142862 system_svc.go:44] waiting for kubelet service to be running ....
	I0603 13:56:25.659058 1142862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:25.674728 1142862 system_svc.go:56] duration metric: took 15.717684ms WaitForService to wait for kubelet
	I0603 13:56:25.674759 1142862 kubeadm.go:576] duration metric: took 3.790431991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0603 13:56:25.674777 1142862 node_conditions.go:102] verifying NodePressure condition ...
	I0603 13:56:25.855640 1142862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0603 13:56:25.855671 1142862 node_conditions.go:123] node cpu capacity is 2
	I0603 13:56:25.855684 1142862 node_conditions.go:105] duration metric: took 180.901974ms to run NodePressure ...
	I0603 13:56:25.855696 1142862 start.go:240] waiting for startup goroutines ...
	I0603 13:56:25.855703 1142862 start.go:245] waiting for cluster config update ...
	I0603 13:56:25.855716 1142862 start.go:254] writing updated cluster config ...
	I0603 13:56:25.856020 1142862 ssh_runner.go:195] Run: rm -f paused
	I0603 13:56:25.908747 1142862 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0603 13:56:25.911049 1142862 out.go:177] * Done! kubectl is now configured to use "no-preload-817450" cluster and "default" namespace by default
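	At this point the second start of the "no-preload-817450" profile has completed: the node reports Ready, all system-critical pods are Running, and the storage-provisioner, default-storageclass and metrics-server addons have been re-applied (the metrics-server pod is still Pending in the listings above). A quick manual re-check of that state from the test host could look like the sketch below; this is only an illustration using the standard minikube and kubectl CLIs, with the context name taken from the "Done!" line above:

	    minikube addons list -p no-preload-817450
	    kubectl --context no-preload-817450 -n kube-system get pods -o wide
	    kubectl --context no-preload-817450 -n kube-system get deploy metrics-server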
	I0603 13:56:44.813650 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:56:44.813933 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:56:44.813964 1143678 kubeadm.go:309] 
	I0603 13:56:44.814039 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:56:44.814075 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:56:44.814115 1143678 kubeadm.go:309] 
	I0603 13:56:44.814197 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:56:44.814246 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:56:44.814369 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:56:44.814378 1143678 kubeadm.go:309] 
	I0603 13:56:44.814496 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:56:44.814540 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:56:44.814573 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:56:44.814580 1143678 kubeadm.go:309] 
	I0603 13:56:44.814685 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:56:44.814785 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:56:44.814798 1143678 kubeadm.go:309] 
	I0603 13:56:44.814896 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:56:44.815001 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:56:44.815106 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:56:44.815208 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:56:44.815220 1143678 kubeadm.go:309] 
	I0603 13:56:44.816032 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:56:44.816137 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:56:44.816231 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0603 13:56:44.816405 1143678 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
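	The kubeadm failure quoted above is the first of two attempts by this run (pid 1143678) to bring up a v1.20.0 control plane; the retry further below fails the same way at 13:58:41. The error text itself already names the useful checks; gathered into a single pass over the affected node they amount to roughly the following (a sketch only; <profile> is a placeholder for whichever profile pid 1143678 is starting, which is not named in this excerpt):

	    minikube ssh -p <profile> -- "sudo systemctl status kubelet"
	    minikube ssh -p <profile> -- "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	    minikube ssh -p <profile> -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"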
	
	I0603 13:56:44.816480 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0603 13:56:45.288649 1143678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:56:45.305284 1143678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0603 13:56:45.316705 1143678 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0603 13:56:45.316736 1143678 kubeadm.go:156] found existing configuration files:
	
	I0603 13:56:45.316804 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0603 13:56:45.327560 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0603 13:56:45.327630 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0603 13:56:45.337910 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0603 13:56:45.349864 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0603 13:56:45.349948 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0603 13:56:45.361369 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.371797 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0603 13:56:45.371866 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0603 13:56:45.382861 1143678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0603 13:56:45.393310 1143678 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0603 13:56:45.393382 1143678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
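	The four grep/rm pairs at 13:56:45 encode a simple stale-config rule: any of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm is re-run (here every grep fails with status 2 because the files no longer exist after the kubeadm reset above). Expressed as one loop on the node, the same check is roughly:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	        || sudo rm -f /etc/kubernetes/$f.conf
	    done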
	I0603 13:56:45.403822 1143678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0603 13:56:45.476725 1143678 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0603 13:56:45.476794 1143678 kubeadm.go:309] [preflight] Running pre-flight checks
	I0603 13:56:45.630786 1143678 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0603 13:56:45.630956 1143678 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0603 13:56:45.631125 1143678 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0603 13:56:45.814370 1143678 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0603 13:56:45.816372 1143678 out.go:204]   - Generating certificates and keys ...
	I0603 13:56:45.816481 1143678 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0603 13:56:45.816556 1143678 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0603 13:56:45.816710 1143678 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0603 13:56:45.816831 1143678 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0603 13:56:45.816928 1143678 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0603 13:56:45.817003 1143678 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0603 13:56:45.817093 1143678 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0603 13:56:45.817178 1143678 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0603 13:56:45.817328 1143678 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0603 13:56:45.817477 1143678 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0603 13:56:45.817533 1143678 kubeadm.go:309] [certs] Using the existing "sa" key
	I0603 13:56:45.817607 1143678 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0603 13:56:46.025905 1143678 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0603 13:56:46.331809 1143678 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0603 13:56:46.551488 1143678 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0603 13:56:46.636938 1143678 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0603 13:56:46.663292 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0603 13:56:46.663400 1143678 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0603 13:56:46.663448 1143678 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0603 13:56:46.840318 1143678 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0603 13:56:46.842399 1143678 out.go:204]   - Booting up control plane ...
	I0603 13:56:46.842530 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0603 13:56:46.851940 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0603 13:56:46.855283 1143678 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0603 13:56:46.855443 1143678 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0603 13:56:46.857883 1143678 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0603 13:57:26.860915 1143678 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0603 13:57:26.861047 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:26.861296 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:31.861724 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:31.862046 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:57:41.862803 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:57:41.863057 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:01.862907 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:01.863136 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862069 1143678 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0603 13:58:41.862391 1143678 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0603 13:58:41.862430 1143678 kubeadm.go:309] 
	I0603 13:58:41.862535 1143678 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0603 13:58:41.862613 1143678 kubeadm.go:309] 		timed out waiting for the condition
	I0603 13:58:41.862624 1143678 kubeadm.go:309] 
	I0603 13:58:41.862675 1143678 kubeadm.go:309] 	This error is likely caused by:
	I0603 13:58:41.862737 1143678 kubeadm.go:309] 		- The kubelet is not running
	I0603 13:58:41.862895 1143678 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0603 13:58:41.862909 1143678 kubeadm.go:309] 
	I0603 13:58:41.863030 1143678 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0603 13:58:41.863060 1143678 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0603 13:58:41.863090 1143678 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0603 13:58:41.863100 1143678 kubeadm.go:309] 
	I0603 13:58:41.863230 1143678 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0603 13:58:41.863388 1143678 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0603 13:58:41.863406 1143678 kubeadm.go:309] 
	I0603 13:58:41.863583 1143678 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0603 13:58:41.863709 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0603 13:58:41.863811 1143678 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0603 13:58:41.863894 1143678 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0603 13:58:41.863917 1143678 kubeadm.go:309] 
	I0603 13:58:41.865001 1143678 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0603 13:58:41.865120 1143678 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0603 13:58:41.865209 1143678 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0603 13:58:41.865361 1143678 kubeadm.go:393] duration metric: took 8m3.432874561s to StartCluster
	I0603 13:58:41.865460 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0603 13:58:41.865537 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0603 13:58:41.912780 1143678 cri.go:89] found id: ""
	I0603 13:58:41.912812 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.912826 1143678 logs.go:278] No container was found matching "kube-apiserver"
	I0603 13:58:41.912832 1143678 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0603 13:58:41.912901 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0603 13:58:41.951372 1143678 cri.go:89] found id: ""
	I0603 13:58:41.951402 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.951411 1143678 logs.go:278] No container was found matching "etcd"
	I0603 13:58:41.951418 1143678 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0603 13:58:41.951490 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0603 13:58:41.989070 1143678 cri.go:89] found id: ""
	I0603 13:58:41.989104 1143678 logs.go:276] 0 containers: []
	W0603 13:58:41.989115 1143678 logs.go:278] No container was found matching "coredns"
	I0603 13:58:41.989123 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0603 13:58:41.989191 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0603 13:58:42.026208 1143678 cri.go:89] found id: ""
	I0603 13:58:42.026238 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.026246 1143678 logs.go:278] No container was found matching "kube-scheduler"
	I0603 13:58:42.026252 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0603 13:58:42.026312 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0603 13:58:42.064899 1143678 cri.go:89] found id: ""
	I0603 13:58:42.064941 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.064950 1143678 logs.go:278] No container was found matching "kube-proxy"
	I0603 13:58:42.064971 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0603 13:58:42.065043 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0603 13:58:42.098817 1143678 cri.go:89] found id: ""
	I0603 13:58:42.098858 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.098868 1143678 logs.go:278] No container was found matching "kube-controller-manager"
	I0603 13:58:42.098876 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0603 13:58:42.098939 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0603 13:58:42.133520 1143678 cri.go:89] found id: ""
	I0603 13:58:42.133558 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.133570 1143678 logs.go:278] No container was found matching "kindnet"
	I0603 13:58:42.133579 1143678 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0603 13:58:42.133639 1143678 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0603 13:58:42.187356 1143678 cri.go:89] found id: ""
	I0603 13:58:42.187387 1143678 logs.go:276] 0 containers: []
	W0603 13:58:42.187399 1143678 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0603 13:58:42.187412 1143678 logs.go:123] Gathering logs for kubelet ...
	I0603 13:58:42.187434 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0603 13:58:42.249992 1143678 logs.go:123] Gathering logs for dmesg ...
	I0603 13:58:42.250034 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0603 13:58:42.272762 1143678 logs.go:123] Gathering logs for describe nodes ...
	I0603 13:58:42.272801 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0603 13:58:42.362004 1143678 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0603 13:58:42.362030 1143678 logs.go:123] Gathering logs for CRI-O ...
	I0603 13:58:42.362046 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0603 13:58:42.468630 1143678 logs.go:123] Gathering logs for container status ...
	I0603 13:58:42.468676 1143678 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0603 13:58:42.510945 1143678 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0603 13:58:42.511002 1143678 out.go:239] * 
	W0603 13:58:42.511094 1143678 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.511119 1143678 out.go:239] * 
	W0603 13:58:42.512307 1143678 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0603 13:58:42.516199 1143678 out.go:177] 
	W0603 13:58:42.517774 1143678 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0603 13:58:42.517848 1143678 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0603 13:58:42.517883 1143678 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0603 13:58:42.519747 1143678 out.go:177] 
	
	
	==> CRI-O <==
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.453512610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423807453480765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eebe6f08-baed-440c-9250-8c7fb7bfa4e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.454123576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=712324db-bd48-45b0-8ec5-9b97e547d4c3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.454214507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=712324db-bd48-45b0-8ec5-9b97e547d4c3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.454316293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=712324db-bd48-45b0-8ec5-9b97e547d4c3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.490441802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16c89b09-2e81-40b1-aff5-d3cf8fb2b5e8 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.490541743Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16c89b09-2e81-40b1-aff5-d3cf8fb2b5e8 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.491873314Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dc351d7-86d5-4e8d-b4ba-e21436d913b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.492339250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423807492310086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dc351d7-86d5-4e8d-b4ba-e21436d913b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.492893047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d34ea15-274f-4368-b1f6-7726f162b4d8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.492963486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d34ea15-274f-4368-b1f6-7726f162b4d8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.492998319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7d34ea15-274f-4368-b1f6-7726f162b4d8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.528516577Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f12a9032-0ec2-4001-b739-94a208c1ef79 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.528600343Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f12a9032-0ec2-4001-b739-94a208c1ef79 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.530072222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c580865a-977c-4341-a17b-ee77411a0700 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.530566163Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423807530538391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c580865a-977c-4341-a17b-ee77411a0700 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.531154329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b02a9b3-29e1-4c16-846c-184ab288866e name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.531198224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b02a9b3-29e1-4c16-846c-184ab288866e name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.531233963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5b02a9b3-29e1-4c16-846c-184ab288866e name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.568598889Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65b85475-1951-4009-b0a7-28a84c684e64 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.568688533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65b85475-1951-4009-b0a7-28a84c684e64 name=/runtime.v1.RuntimeService/Version
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.569935332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5458412-1185-4dc9-8db0-b9b43ebbc797 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.570423300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717423807570399637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5458412-1185-4dc9-8db0-b9b43ebbc797 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.571357760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91c5032a-f962-49d9-8b14-1e590c02c3f7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.571412824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91c5032a-f962-49d9-8b14-1e590c02c3f7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 03 14:10:07 old-k8s-version-151788 crio[659]: time="2024-06-03 14:10:07.571451271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=91c5032a-f962-49d9-8b14-1e590c02c3f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun 3 13:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055954] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042975] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.825342] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.576562] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.695734] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.047871] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.063174] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.087641] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.197728] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.185593] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.323645] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +6.685681] systemd-fstab-generator[846]: Ignoring "noauto" option for root device
	[  +0.076136] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.031593] systemd-fstab-generator[973]: Ignoring "noauto" option for root device
	[ +10.661843] kauditd_printk_skb: 46 callbacks suppressed
	[Jun 3 13:54] systemd-fstab-generator[5027]: Ignoring "noauto" option for root device
	[Jun 3 13:56] systemd-fstab-generator[5309]: Ignoring "noauto" option for root device
	[  +0.079564] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:10:07 up 19 min,  0 users,  load average: 0.03, 0.06, 0.07
	Linux old-k8s-version-151788 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00024cef0)
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009adef0, 0x4f0ac20, 0xc000b8a0a0, 0x1, 0xc00009e0c0)
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001d40e0, 0xc00009e0c0)
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000d5e790, 0xc0001d60c0)
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6806]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jun 03 14:10:07 old-k8s-version-151788 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 03 14:10:07 old-k8s-version-151788 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 03 14:10:07 old-k8s-version-151788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 139.
	Jun 03 14:10:07 old-k8s-version-151788 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 03 14:10:07 old-k8s-version-151788 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6888]: I0603 14:10:07.770907    6888 server.go:416] Version: v1.20.0
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6888]: I0603 14:10:07.771391    6888 server.go:837] Client rotation is on, will bootstrap in background
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6888]: I0603 14:10:07.775941    6888 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6888]: W0603 14:10:07.778417    6888 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 03 14:10:07 old-k8s-version-151788 kubelet[6888]: I0603 14:10:07.780047    6888 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 2 (230.741453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-151788" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (139.12s)
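For reference, a minimal sketch of the remediation the failure output above suggests (profile name, driver, runtime, and Kubernetes version are taken from this run; the delete/start sequence and follow-up checks are illustrative, not a verified fix):

  # Re-create the profile with the kubelet cgroup driver pinned to systemd,
  # as suggested in the "Exiting due to K8S_KUBELET_NOT_RUNNING" output above.
  minikube delete -p old-k8s-version-151788
  minikube start -p old-k8s-version-151788 \
    --driver=kvm2 --container-runtime=crio \
    --kubernetes-version=v1.20.0 \
    --extra-config=kubelet.cgroup-driver=systemd

  # If the kubelet still fails to come up, inspect it on the node directly,
  # following the kubeadm troubleshooting hints quoted in the log:
  minikube ssh -p old-k8s-version-151788 "sudo systemctl status kubelet"
  minikube ssh -p old-k8s-version-151788 "sudo journalctl -xeu kubelet"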

                                                
                                    

Test pass (244/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.13
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.1/json-events 4.9
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.06
18 TestDownloadOnly/v1.30.1/DeleteAll 0.13
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.56
22 TestOffline 63.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 201.18
29 TestAddons/parallel/Registry 16.22
31 TestAddons/parallel/InspektorGadget 10.91
33 TestAddons/parallel/HelmTiller 11.96
35 TestAddons/parallel/CSI 42.92
36 TestAddons/parallel/Headlamp 14.13
37 TestAddons/parallel/CloudSpanner 6.56
39 TestAddons/parallel/NvidiaDevicePlugin 6.52
40 TestAddons/parallel/Yakd 5.01
44 TestAddons/serial/GCPAuth/Namespaces 0.12
46 TestCertOptions 67.43
47 TestCertExpiration 250.88
49 TestForceSystemdFlag 73.38
50 TestForceSystemdEnv 88.06
52 TestKVMDriverInstallOrUpdate 3.17
56 TestErrorSpam/setup 41.32
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.57
60 TestErrorSpam/unpause 1.63
61 TestErrorSpam/stop 4.98
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 56.71
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 34.61
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.37
73 TestFunctional/serial/CacheCmd/cache/add_local 1.47
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 62.94
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.46
84 TestFunctional/serial/LogsFileCmd 1.45
85 TestFunctional/serial/InvalidService 4.73
87 TestFunctional/parallel/ConfigCmd 0.35
88 TestFunctional/parallel/DashboardCmd 14.99
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 1.04
95 TestFunctional/parallel/ServiceCmdConnect 10.64
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 44.99
99 TestFunctional/parallel/SSHCmd 0.47
100 TestFunctional/parallel/CpCmd 1.62
101 TestFunctional/parallel/MySQL 25.76
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.4
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
111 TestFunctional/parallel/License 0.17
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.7
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.75
119 TestFunctional/parallel/ImageCommands/Setup 0.87
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.23
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.76
135 TestFunctional/parallel/ServiceCmd/DeployApp 11.17
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.95
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
138 TestFunctional/parallel/ServiceCmd/List 0.47
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.95
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
142 TestFunctional/parallel/ServiceCmd/Format 0.43
143 TestFunctional/parallel/ServiceCmd/URL 0.37
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
145 TestFunctional/parallel/MountCmd/any-port 6.8
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.18
147 TestFunctional/parallel/ProfileCmd/profile_list 0.35
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
149 TestFunctional/parallel/MountCmd/specific-port 1.99
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.72
151 TestFunctional/delete_addon-resizer_images 0.07
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 203.25
158 TestMultiControlPlane/serial/DeployApp 4.59
159 TestMultiControlPlane/serial/PingHostFromPods 1.26
160 TestMultiControlPlane/serial/AddWorkerNode 44.59
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
163 TestMultiControlPlane/serial/CopyFile 13.05
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
169 TestMultiControlPlane/serial/DeleteSecondaryNode 17.06
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/RestartCluster 354.71
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
174 TestMultiControlPlane/serial/AddSecondaryNode 77.59
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
179 TestJSONOutput/start/Command 60.14
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.7
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.64
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.69
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.2
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 86.02
211 TestMountStart/serial/StartWithMountFirst 27.15
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 27
214 TestMountStart/serial/VerifyMountSecond 0.38
215 TestMountStart/serial/DeleteFirst 0.68
216 TestMountStart/serial/VerifyMountPostDelete 0.38
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 22.3
219 TestMountStart/serial/VerifyMountPostStop 0.38
222 TestMultiNode/serial/FreshStart2Nodes 94.75
223 TestMultiNode/serial/DeployApp2Nodes 3.4
224 TestMultiNode/serial/PingHostFrom2Pods 0.81
225 TestMultiNode/serial/AddNode 37.11
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.22
228 TestMultiNode/serial/CopyFile 7.37
229 TestMultiNode/serial/StopNode 2.36
230 TestMultiNode/serial/StartAfterStop 26.69
232 TestMultiNode/serial/DeleteNode 2.45
234 TestMultiNode/serial/RestartMultiNode 208.44
235 TestMultiNode/serial/ValidateNameConflict 45.94
242 TestScheduledStopUnix 111.01
246 TestRunningBinaryUpgrade 217.4
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 96.72
253 TestNoKubernetes/serial/StartWithStopK8s 42.5
254 TestNoKubernetes/serial/Start 27.36
262 TestNetworkPlugins/group/false 2.96
266 TestStoppedBinaryUpgrade/Setup 0.36
267 TestStoppedBinaryUpgrade/Upgrade 148.73
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
269 TestNoKubernetes/serial/ProfileList 26.6
270 TestNoKubernetes/serial/Stop 1.28
271 TestNoKubernetes/serial/StartNoArgs 21.2
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
281 TestPause/serial/Start 98.46
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
283 TestNetworkPlugins/group/auto/Start 64.53
284 TestNetworkPlugins/group/calico/Start 116.61
286 TestNetworkPlugins/group/auto/KubeletFlags 0.23
287 TestNetworkPlugins/group/auto/NetCatPod 10.25
288 TestNetworkPlugins/group/auto/DNS 0.18
289 TestNetworkPlugins/group/auto/Localhost 0.17
290 TestNetworkPlugins/group/auto/HairPin 0.16
291 TestNetworkPlugins/group/custom-flannel/Start 81.51
292 TestNetworkPlugins/group/kindnet/Start 93.61
293 TestNetworkPlugins/group/flannel/Start 123.26
294 TestNetworkPlugins/group/calico/ControllerPod 6.01
295 TestNetworkPlugins/group/calico/KubeletFlags 0.23
296 TestNetworkPlugins/group/calico/NetCatPod 12.32
297 TestNetworkPlugins/group/calico/DNS 0.16
298 TestNetworkPlugins/group/calico/Localhost 0.13
299 TestNetworkPlugins/group/calico/HairPin 0.13
300 TestNetworkPlugins/group/enable-default-cni/Start 73.94
301 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
302 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.31
303 TestNetworkPlugins/group/custom-flannel/DNS 0.2
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
306 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
307 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
308 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
309 TestNetworkPlugins/group/bridge/Start 68.07
310 TestNetworkPlugins/group/kindnet/DNS 0.19
311 TestNetworkPlugins/group/kindnet/Localhost 0.17
312 TestNetworkPlugins/group/kindnet/HairPin 0.17
315 TestNetworkPlugins/group/flannel/ControllerPod 6.01
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.27
318 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
319 TestNetworkPlugins/group/flannel/NetCatPod 13.46
320 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
321 TestNetworkPlugins/group/flannel/DNS 0.2
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
323 TestNetworkPlugins/group/flannel/Localhost 0.17
324 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
325 TestNetworkPlugins/group/flannel/HairPin 0.17
327 TestStartStop/group/no-preload/serial/FirstStart 79.6
329 TestStartStop/group/embed-certs/serial/FirstStart 131.21
330 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
331 TestNetworkPlugins/group/bridge/NetCatPod 10.24
332 TestNetworkPlugins/group/bridge/DNS 0.16
333 TestNetworkPlugins/group/bridge/Localhost 0.14
334 TestNetworkPlugins/group/bridge/HairPin 0.17
336 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 129.4
337 TestStartStop/group/no-preload/serial/DeployApp 8.33
338 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.32
340 TestStartStop/group/embed-certs/serial/DeployApp 8.29
341 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.27
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
349 TestStartStop/group/no-preload/serial/SecondStart 695.69
351 TestStartStop/group/embed-certs/serial/SecondStart 525.8
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 519.45
354 TestStartStop/group/old-k8s-version/serial/Stop 2.3
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
366 TestStartStop/group/newest-cni/serial/FirstStart 54.77
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
369 TestStartStop/group/newest-cni/serial/Stop 7.36
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
371 TestStartStop/group/newest-cni/serial/SecondStart 36.19
372 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/newest-cni/serial/Pause 2.51
x
+
TestDownloadOnly/v1.20.0/json-events (10.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-979896 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-979896 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.133743429s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-979896
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-979896: exit status 85 (60.97502ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-979896 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |          |
	|         | -p download-only-979896        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:24:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:24:07.872075 1086263 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:24:07.872365 1086263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:24:07.872376 1086263 out.go:304] Setting ErrFile to fd 2...
	I0603 12:24:07.872383 1086263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:24:07.872589 1086263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	W0603 12:24:07.872730 1086263 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19011-1078924/.minikube/config/config.json: open /home/jenkins/minikube-integration/19011-1078924/.minikube/config/config.json: no such file or directory
	I0603 12:24:07.873318 1086263 out.go:298] Setting JSON to true
	I0603 12:24:07.874371 1086263 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11195,"bootTime":1717406253,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:24:07.874783 1086263 start.go:139] virtualization: kvm guest
	I0603 12:24:07.877414 1086263 out.go:97] [download-only-979896] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0603 12:24:07.877507 1086263 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball: no such file or directory
	I0603 12:24:07.878972 1086263 out.go:169] MINIKUBE_LOCATION=19011
	I0603 12:24:07.877551 1086263 notify.go:220] Checking for updates...
	I0603 12:24:07.881658 1086263 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:24:07.883020 1086263 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:24:07.884507 1086263 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:24:07.885941 1086263 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0603 12:24:07.888372 1086263 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0603 12:24:07.888593 1086263 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:24:07.920185 1086263 out.go:97] Using the kvm2 driver based on user configuration
	I0603 12:24:07.920212 1086263 start.go:297] selected driver: kvm2
	I0603 12:24:07.920229 1086263 start.go:901] validating driver "kvm2" against <nil>
	I0603 12:24:07.920557 1086263 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:24:07.920676 1086263 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0603 12:24:07.935274 1086263 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0603 12:24:07.935324 1086263 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0603 12:24:07.935786 1086263 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0603 12:24:07.935937 1086263 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0603 12:24:07.936012 1086263 cni.go:84] Creating CNI manager for ""
	I0603 12:24:07.936032 1086263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0603 12:24:07.936045 1086263 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0603 12:24:07.936115 1086263 start.go:340] cluster config:
	{Name:download-only-979896 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-979896 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:24:07.936318 1086263 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0603 12:24:07.938343 1086263 out.go:97] Downloading VM boot image ...
	I0603 12:24:07.938378 1086263 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0603 12:24:09.964905 1086263 out.go:97] Starting "download-only-979896" primary control-plane node in "download-only-979896" cluster
	I0603 12:24:09.964928 1086263 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:24:10.015382 1086263 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 12:24:10.015422 1086263 cache.go:56] Caching tarball of preloaded images
	I0603 12:24:10.015584 1086263 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:24:10.017381 1086263 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0603 12:24:10.017400 1086263 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0603 12:24:10.051946 1086263 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0603 12:24:14.120947 1086263 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0603 12:24:14.121044 1086263 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0603 12:24:15.172448 1086263 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0603 12:24:15.172808 1086263 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/download-only-979896/config.json ...
	I0603 12:24:15.172839 1086263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/download-only-979896/config.json: {Name:mk5a36bc1d79259cb5ae5c6de8d723b29477d757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0603 12:24:15.172997 1086263 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0603 12:24:15.173165 1086263 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-979896 host does not exist
	  To start a cluster, run: "minikube start -p download-only-979896"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
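Incidentally, the preload step in the log above fetches the tarball via a URL that carries an md5 checksum parameter and then verifies it locally; a minimal sketch of repeating that check by hand, with the URL and checksum copied from the log (the local filename and working directory are illustrative):

  # Download the same preload tarball minikube cached in this run.
  curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
  # The md5 value comes from the checksum parameter in the logged download URL.
  echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -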

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-979896
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/json-events (4.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-640021 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-640021 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.904385101s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (4.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-640021
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-640021: exit status 85 (60.978094ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-979896 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |                     |
	|         | -p download-only-979896        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| delete  | -p download-only-979896        | download-only-979896 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
	| start   | -o=json --download-only        | download-only-640021 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC |                     |
	|         | -p download-only-640021        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/03 12:24:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0603 12:24:18.321574 1086453 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:24:18.321833 1086453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:24:18.321842 1086453 out.go:304] Setting ErrFile to fd 2...
	I0603 12:24:18.321846 1086453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:24:18.321999 1086453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:24:18.322546 1086453 out.go:298] Setting JSON to true
	I0603 12:24:18.323533 1086453 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11205,"bootTime":1717406253,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:24:18.323594 1086453 start.go:139] virtualization: kvm guest
	I0603 12:24:18.325630 1086453 out.go:97] [download-only-640021] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:24:18.327196 1086453 out.go:169] MINIKUBE_LOCATION=19011
	I0603 12:24:18.325763 1086453 notify.go:220] Checking for updates...
	I0603 12:24:18.330194 1086453 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:24:18.331670 1086453 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:24:18.333139 1086453 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:24:18.334495 1086453 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-640021 host does not exist
	  To start a cluster, run: "minikube start -p download-only-640021"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.06s)
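Note: `minikube logs -p download-only-640021` exits with status 85 here because a download-only profile never starts a host ("The control-plane node download-only-640021 host does not exist"), and the test tolerates that. Below is a minimal Go sketch, not the repo's test helper, of running a CLI and distinguishing "ran but exited non-zero" from "could not run at all"; the binary path and profile name are copied from the log.

// logs_exitcode_sketch.go — hypothetical illustration, not part of the suite.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runAndExitCode runs a command and returns its combined output and exit code.
func runAndExitCode(name string, args ...string) (string, int, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err == nil {
		return string(out), 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode(), nil // command ran, exited non-zero (e.g. 85 above)
	}
	return string(out), -1, err // command could not be started at all
}

func main() {
	out, code, err := runAndExitCode("out/minikube-linux-amd64", "logs", "-p", "download-only-640021")
	if err != nil {
		fmt.Println("failed to run:", err)
		return
	}
	fmt.Printf("exit code %d\n%s", code, out)
}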

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-640021
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-778765 --alsologtostderr --binary-mirror http://127.0.0.1:35769 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-778765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-778765
--- PASS: TestBinaryMirror (0.56s)
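TestBinaryMirror points minikube at a local HTTP endpoint via `--binary-mirror http://127.0.0.1:35769`. A minimal Go sketch of a local file server such a flag could target is below; the `./mirror` directory and its contents are assumptions, not what the test itself serves.

// mirror_sketch.go — hypothetical local mirror, not the test's own server.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory over HTTP; the real test serves cached binaries
	// from a temporary directory it prepares itself.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("serving on http://127.0.0.1:35769")
	log.Fatal(http.ListenAndServe("127.0.0.1:35769", nil))
}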

                                                
                                    
x
+
TestOffline (63.53s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-416180 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-416180 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.456651268s)
helpers_test.go:175: Cleaning up "offline-crio-416180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-416180
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-416180: (1.071944584s)
--- PASS: TestOffline (63.53s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-699562
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-699562: exit status 85 (50.388663ms)

                                                
                                                
-- stdout --
	* Profile "addons-699562" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-699562"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-699562
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-699562: exit status 85 (51.839727ms)

                                                
                                                
-- stdout --
	* Profile "addons-699562" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-699562"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (201.18s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-699562 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-699562 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m21.18057082s)
--- PASS: TestAddons/Setup (201.18s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.22s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 17.654149ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jrrh7" [af432feb-b699-477a-8cd5-ff109071d13d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005517576s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-n8265" [343bbd2c-1a4b-4796-8401-ebd3686c0a61] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006599097s
addons_test.go:342: (dbg) Run:  kubectl --context addons-699562 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-699562 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-699562 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.400969335s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 ip
2024/06/03 12:28:01 [DEBUG] GET http://192.168.39.241:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.22s)
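The registry check above probes the addon from inside the cluster: a throwaway busybox pod runs `wget --spider -S http://registry.kube-system.svc.cluster.local`. A minimal Go sketch of that probe, not the suite's helper, is below; the context name and image are copied from the log, and `-t` is omitted since the sketch has no TTY.

// registry_probe_sketch.go — hypothetical illustration of the in-cluster probe.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --rm with -i keeps kubectl attached until the pod finishes, then deletes it.
	cmd := exec.Command("kubectl", "--context", "addons-699562",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("registry not reachable from inside the cluster:", err)
	}
}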

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wkk2d" [4b8b19f6-6ca6-44a1-8020-69e2ed475ef8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006805852s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-699562
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-699562: (5.904330102s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.96s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 15.871128ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-k4tt8" [0ecadef4-5251-4d11-a39c-77a196200334] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.007849052s
addons_test.go:475: (dbg) Run:  kubectl --context addons-699562 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-699562 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.316546914s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.96s)

                                                
                                    
x
+
TestAddons/parallel/CSI (42.92s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.234752ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-699562 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-699562 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8aac43dd-30a2-4c77-8743-6b1509a54756] Pending
helpers_test.go:344: "task-pv-pod" [8aac43dd-30a2-4c77-8743-6b1509a54756] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8aac43dd-30a2-4c77-8743-6b1509a54756] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004755369s
addons_test.go:586: (dbg) Run:  kubectl --context addons-699562 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-699562 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-699562 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-699562 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-699562 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-699562 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-699562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-699562 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4a308194-0c01-4b58-88a0-6e43ce1cad75] Pending
helpers_test.go:344: "task-pv-pod-restore" [4a308194-0c01-4b58-88a0-6e43ce1cad75] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4a308194-0c01-4b58-88a0-6e43ce1cad75] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004505692s
addons_test.go:628: (dbg) Run:  kubectl --context addons-699562 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-699562 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-699562 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-699562 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.698185618s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-699562 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.92s)
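The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` lines above are a poll until the claim binds. A minimal Go sketch of that loop, assuming kubectl is on PATH, is below; "Bound" is the standard PVC phase, the 6m timeout mirrors the log, and the 2s interval is an assumption.

// pvc_wait_sketch.go — hypothetical polling helper, not the suite's own.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls a PVC's .status.phase until it reports "Bound" or the deadline passes.
func waitForPVCBound(context, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // interval is a guess; the real helper's may differ
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-699562", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}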

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-699562 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-699562 --alsologtostderr -v=1: (1.129362769s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-tpgtj" [c02f3cb7-dd75-4d83-89fe-082ca6c80805] Pending
helpers_test.go:344: "headlamp-68456f997b-tpgtj" [c02f3cb7-dd75-4d83-89fe-082ca6c80805] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-tpgtj" [c02f3cb7-dd75-4d83-89fe-082ca6c80805] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004145127s
--- PASS: TestAddons/parallel/Headlamp (14.13s)
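Almost every "waiting Xm0s for pods matching ..." line in this report comes from the same pod-wait helper (helpers_test.go:344). A minimal Go sketch of that polling pattern is below, assuming kubectl is on PATH; it only checks the pod phase, whereas the suite's helper also reports readiness conditions, and the poll interval is an assumption.

// pod_wait_sketch.go — hypothetical label-selector wait, not the suite's helper.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls until every pod matching the selector reports phase Running.
func waitForRunning(context, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pods", "-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 {
			allRunning := true
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %s", selector, namespace, timeout)
}

func main() {
	if err := waitForRunning("addons-699562", "headlamp", "app.kubernetes.io/name=headlamp", 8*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("headlamp pods are Running")
}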

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-xl8zp" [10c5707e-899d-4f72-85a7-a3d1b3455960] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004268282s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-699562
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2sw5z" [3ad1866a-b3d5-4783-b2dd-557082180d8f] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005236214s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-699562
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-th7qj" [cb66a0b3-53cb-493e-8010-d545cc1dc5b8] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004384863s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-699562 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-699562 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (67.43s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-724800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-724800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m6.1122267s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-724800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-724800 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-724800 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-724800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-724800
--- PASS: TestCertOptions (67.43s)
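TestCertOptions passes extra SANs (--apiserver-ips=127.0.0.1,192.168.15.15; --apiserver-names=localhost,www.google.com) and then dumps the apiserver certificate with openssl over `minikube ssh`. Below is a minimal, openssl-free Go sketch of the same SAN inspection using crypto/x509; reading the certificate from a local file named apiserver.crt is an assumption for illustration only.

// cert_sans_sketch.go — hypothetical SAN check, not the suite's method.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the cert
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // would include localhost, www.google.com
	fmt.Println("IP SANs: ", cert.IPAddresses) // would include 127.0.0.1, 192.168.15.15
}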

                                                
                                    
x
+
TestCertExpiration (250.88s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-925487 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-925487 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (44.068203345s)
E0603 13:34:41.278640 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-925487 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-925487 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (25.705680981s)
helpers_test.go:175: Cleaning up "cert-expiration-925487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-925487
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-925487: (1.104823869s)
--- PASS: TestCertExpiration (250.88s)

                                                
                                    
x
+
TestForceSystemdFlag (73.38s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-977376 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-977376 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m12.025610408s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-977376 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-977376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-977376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-977376: (1.127385119s)
--- PASS: TestForceSystemdFlag (73.38s)

                                                
                                    
x
+
TestForceSystemdEnv (88.06s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-416305 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-416305 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m27.050827659s)
helpers_test.go:175: Cleaning up "force-systemd-env-416305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-416305
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-416305: (1.010512056s)
--- PASS: TestForceSystemdEnv (88.06s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.17s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
E0603 13:32:45.541327 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (3.17s)

                                                
                                    
x
+
TestErrorSpam/setup (41.32s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-517769 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-517769 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-517769 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-517769 --driver=kvm2  --container-runtime=crio: (41.315125405s)
--- PASS: TestErrorSpam/setup (41.32s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
x
+
TestErrorSpam/pause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 pause
--- PASS: TestErrorSpam/pause (1.57s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

                                                
                                    
x
+
TestErrorSpam/stop (4.98s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 stop: (2.285542112s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 stop: (1.571318527s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-517769 --log_dir /tmp/nospam-517769 stop: (1.119943826s)
--- PASS: TestErrorSpam/stop (4.98s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19011-1078924/.minikube/files/etc/test/nested/copy/1086251/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (56.71s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-093300 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0603 12:37:45.542126 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:45.548067 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:45.558341 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:45.578601 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:45.618933 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:45.699316 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:45.859742 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:46.180356 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:46.821329 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:48.101746 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:50.663614 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:37:55.784799 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-093300 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.709808504s)
--- PASS: TestFunctional/serial/StartWithProxy (56.71s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (34.61s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-093300 --alsologtostderr -v=8
E0603 12:38:06.025326 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:38:26.505861 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-093300 --alsologtostderr -v=8: (34.612616356s)
functional_test.go:659: soft start took 34.613322178s for "functional-093300" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.61s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-093300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 cache add registry.k8s.io/pause:3.1: (1.190758696s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 cache add registry.k8s.io/pause:3.3: (1.114357321s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 cache add registry.k8s.io/pause:latest: (1.060239255s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-093300 /tmp/TestFunctionalserialCacheCmdcacheadd_local2834214630/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 cache add minikube-local-cache-test:functional-093300
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 cache add minikube-local-cache-test:functional-093300: (1.116722974s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 cache delete minikube-local-cache-test:functional-093300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-093300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (216.784313ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)
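The reload round-trip above deletes a cached image on the node, confirms `crictl inspecti` now fails, runs `cache reload`, and confirms the image is back. A minimal Go sketch chaining those same commands is below; the binary path and profile name are copied from the log, and passing the remote command to `minikube ssh` as a single string is a choice of the sketch.

// cache_reload_sketch.go — hypothetical illustration of the reload flow.
package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary, echoes its output, and returns any exit error.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	_ = run("-p", "functional-093300", "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := run("-p", "functional-093300", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	_ = run("-p", "functional-093300", "cache", "reload")
	if err := run("-p", "functional-093300", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}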

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 kubectl -- --context functional-093300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-093300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (62.94s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-093300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0603 12:39:07.467607 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-093300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.941307408s)
functional_test.go:757: restart took 1m2.94143154s for "functional-093300" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (62.94s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-093300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
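The health check above lists the tier=control-plane pods as JSON and reports each component's phase and Ready status. A minimal Go sketch of decoding that output with throwaway structs is below, assuming kubectl is on PATH; the context name is copied from the log.

// component_health_sketch.go — hypothetical decode of `kubectl get po -o=json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-093300",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}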

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 logs: (1.457662476s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 logs --file /tmp/TestFunctionalserialLogsFileCmd2909442830/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 logs --file /tmp/TestFunctionalserialLogsFileCmd2909442830/001/logs.txt: (1.44502855s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.73s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-093300 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-093300
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-093300: exit status 115 (276.870589ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.250:30588 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-093300 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-093300 delete -f testdata/invalidsvc.yaml: (1.256918376s)
--- PASS: TestFunctional/serial/InvalidService (4.73s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 config get cpus: exit status 14 (49.547193ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 config get cpus: exit status 14 (51.533839ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
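The unset/get/set/get sequence above leans on `minikube config get` exiting non-zero when the key is absent (exit status 14 in this run). A small Go sketch of interpreting that result follows; treating 14 as "key not set" is an assumption taken from this log, not a documented contract, and the helper name is illustrative.

// config_get.go - sketch: read a minikube config value and distinguish
// "key not set" from real failures, based on the exit status 14 observed
// in the ConfigCmd log above (an assumption, not an API guarantee).
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

var errKeyNotSet = errors.New("config key not set")

func configGet(profile, key string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"config", "get", key).Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		return "", errKeyNotSet
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	v, err := configGet("functional-093300", "cpus")
	switch {
	case errors.Is(err, errKeyNotSet):
		fmt.Println("cpus is not set")
	case err != nil:
		fmt.Println("config get failed:", err)
	default:
		fmt.Println("cpus =", v)
	}
}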

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (14.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-093300 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-093300 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1095603: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.99s)
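The dashboard test starts `minikube dashboard --url` as a long-running child process and later tears it down; the "unable to kill pid ... process already finished" line only means the process had already exited by cleanup time. A minimal sketch of that start/read/stop pattern using exec.CommandContext follows; the flags are copied from the log, everything else is illustrative and not the test's own implementation.

// dashboard_daemon.go - sketch: run `minikube dashboard --url` as a child
// process, read the first line of stdout (with --url this should be the
// proxy URL), then stop the child by cancelling its context.
package main

import (
	"bufio"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"dashboard", "--url", "--port", "36195", "-p", "functional-093300")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println("pipe:", err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}

	if sc := bufio.NewScanner(stdout); sc.Scan() {
		fmt.Println("dashboard URL:", sc.Text())
	}

	cancel()       // cancelling the context stops the child process
	_ = cmd.Wait() // reap it; an error here is expected after cancellation
}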

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-093300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-093300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.260085ms)

                                                
                                                
-- stdout --
	* [functional-093300] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:40:11.339092 1094601 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:40:11.339362 1094601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:40:11.339372 1094601 out.go:304] Setting ErrFile to fd 2...
	I0603 12:40:11.339376 1094601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:40:11.339556 1094601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:40:11.340062 1094601 out.go:298] Setting JSON to false
	I0603 12:40:11.341610 1094601 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12158,"bootTime":1717406253,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:40:11.341721 1094601 start.go:139] virtualization: kvm guest
	I0603 12:40:11.343936 1094601 out.go:177] * [functional-093300] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 12:40:11.345385 1094601 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:40:11.345375 1094601 notify.go:220] Checking for updates...
	I0603 12:40:11.347074 1094601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:40:11.348456 1094601 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:40:11.349864 1094601 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:40:11.351319 1094601 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:40:11.352520 1094601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:40:11.354083 1094601 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:40:11.354505 1094601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:40:11.354556 1094601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:40:11.370007 1094601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0603 12:40:11.370383 1094601 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:40:11.370966 1094601 main.go:141] libmachine: Using API Version  1
	I0603 12:40:11.371012 1094601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:40:11.371396 1094601 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:40:11.371627 1094601 main.go:141] libmachine: (functional-093300) Calling .DriverName
	I0603 12:40:11.371879 1094601 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:40:11.372161 1094601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:40:11.372202 1094601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:40:11.387322 1094601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0603 12:40:11.387782 1094601 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:40:11.388254 1094601 main.go:141] libmachine: Using API Version  1
	I0603 12:40:11.388274 1094601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:40:11.388572 1094601 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:40:11.388733 1094601 main.go:141] libmachine: (functional-093300) Calling .DriverName
	I0603 12:40:11.421339 1094601 out.go:177] * Using the kvm2 driver based on existing profile
	I0603 12:40:11.422772 1094601 start.go:297] selected driver: kvm2
	I0603 12:40:11.422808 1094601 start.go:901] validating driver "kvm2" against &{Name:functional-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:40:11.422978 1094601 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:40:11.425599 1094601 out.go:177] 
	W0603 12:40:11.426915 1094601 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0603 12:40:11.428312 1094601 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-093300 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
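The dry-run above validates the requested flags against the existing profile without starting anything; the useful part of a failed run is the "X Exiting due to <REASON>" line on stderr (RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23 here). The sketch below surfaces that reason code from stderr; the reason format and flags are taken from this log and are otherwise an assumption.

// dryrun_check.go - sketch: run `minikube start --dry-run` and, on failure,
// extract the "Exiting due to <REASON>" code from stderr. The reason string
// format is taken from the DryRun log above, not from a documented API.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

var reasonRe = regexp.MustCompile(`Exiting due to ([A-Z_]+)`)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-093300", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		if m := reasonRe.FindStringSubmatch(stderr.String()); m != nil {
			fmt.Println("dry-run rejected, reason:", m[1]) // e.g. RSRC_INSUFFICIENT_REQ_MEMORY
			return
		}
		fmt.Println("dry-run failed:", err)
		return
	}
	fmt.Println("dry-run accepted the requested configuration")
}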

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-093300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-093300 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.86065ms)

                                                
                                                
-- stdout --
	* [functional-093300] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 12:40:24.895631 1095015 out.go:291] Setting OutFile to fd 1 ...
	I0603 12:40:24.895756 1095015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:40:24.895766 1095015 out.go:304] Setting ErrFile to fd 2...
	I0603 12:40:24.895772 1095015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 12:40:24.896015 1095015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 12:40:24.896568 1095015 out.go:298] Setting JSON to false
	I0603 12:40:24.897640 1095015 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12172,"bootTime":1717406253,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 12:40:24.897707 1095015 start.go:139] virtualization: kvm guest
	I0603 12:40:24.899493 1095015 out.go:177] * [functional-093300] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0603 12:40:24.901053 1095015 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 12:40:24.901102 1095015 notify.go:220] Checking for updates...
	I0603 12:40:24.902345 1095015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 12:40:24.903662 1095015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 12:40:24.904929 1095015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 12:40:24.906137 1095015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 12:40:24.907334 1095015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 12:40:24.909067 1095015 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 12:40:24.909706 1095015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:40:24.909810 1095015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:40:24.925088 1095015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41311
	I0603 12:40:24.925483 1095015 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:40:24.925976 1095015 main.go:141] libmachine: Using API Version  1
	I0603 12:40:24.925998 1095015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:40:24.926308 1095015 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:40:24.926470 1095015 main.go:141] libmachine: (functional-093300) Calling .DriverName
	I0603 12:40:24.926755 1095015 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 12:40:24.927089 1095015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 12:40:24.927129 1095015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 12:40:24.942218 1095015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39501
	I0603 12:40:24.942617 1095015 main.go:141] libmachine: () Calling .GetVersion
	I0603 12:40:24.943123 1095015 main.go:141] libmachine: Using API Version  1
	I0603 12:40:24.943144 1095015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 12:40:24.943462 1095015 main.go:141] libmachine: () Calling .GetMachineName
	I0603 12:40:24.943665 1095015 main.go:141] libmachine: (functional-093300) Calling .DriverName
	I0603 12:40:24.975114 1095015 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0603 12:40:24.976622 1095015 start.go:297] selected driver: kvm2
	I0603 12:40:24.976654 1095015 start.go:901] validating driver "kvm2" against &{Name:functional-093300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-093300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0603 12:40:24.976802 1095015 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 12:40:24.979440 1095015 out.go:177] 
	W0603 12:40:24.980792 1095015 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0603 12:40:24.982254 1095015 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
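The French output above ("Utilisation du pilote kvm2 basé sur le profil existant", i.e. "Using the kvm2 driver based on the existing profile") is the same rejected dry-run, rendered under a French locale. The log does not show how the locale was selected, so the sketch below simply assumes it is done through the usual locale environment variables.

// intl_dryrun.go - sketch: invoke the same dry-run with a French locale in
// the child environment. The LC_ALL/LANG values are an assumption about how
// the localized output was obtained; they are not shown in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-093300", "--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")

	out, _ := cmd.CombinedOutput() // non-zero exit is expected: 250MB is below the minimum
	fmt.Print(string(out))
}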

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
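The status checks above exercise three output modes: the default text, a custom Go template over the Host/Kubelet/APIServer/Kubeconfig fields, and `-o json`. A minimal sketch of consuming the JSON form follows; the struct keeps only the field names visible in the template above, which is an assumption about the payload rather than a schema.

// status_json.go - sketch: decode `minikube status -o json` into the fields
// referenced by the template in the log (Host, Kubelet, APIServer, Kubeconfig).
// Any extra fields in the real payload are simply ignored.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-093300", "status", "-o", "json").Output()
	if err != nil {
		// status commonly uses non-zero exits for degraded states,
		// so the captured output may still be worth decoding.
		fmt.Println("status exited with:", err)
	}
	var st clusterStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		fmt.Println("decode failed:", jsonErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}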

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-093300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-093300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-vvz5c" [44d68e1b-cb57-4cea-a11a-fdf2ceac71ec] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-vvz5c" [44d68e1b-cb57-4cea-a11a-fdf2ceac71ec] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004814115s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.250:31246
functional_test.go:1671: http://192.168.39.250:31246: success! body:

Hostname: hello-node-connect-57b4589c47-vvz5c

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.250:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.250:31246
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.64s)
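The echoserver response above is the result of a plain HTTP GET against the NodePort URL that `minikube service hello-node-connect --url` printed. A small sketch of that final verification step follows, reusing the URL from the log; the retry-free flow and the substring check are illustrative simplifications.

// nodeport_check.go - sketch: fetch the NodePort URL reported by
// `minikube service hello-node-connect --url` (value copied from the log)
// and confirm the echoserver reports its pod hostname.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	url := "http://192.168.39.250:31246" // printed by `minikube service ... --url` above

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("echoserver reachable via NodePort")
	} else {
		fmt.Println("unexpected body:\n" + string(body))
	}
}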

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [89ec744f-db88-4b96-af8b-7fdbd2a5afe8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005571154s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-093300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-093300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-093300 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-093300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-093300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9f1f817a-ef56-48d9-afee-8e264cddd137] Pending
helpers_test.go:344: "sp-pod" [9f1f817a-ef56-48d9-afee-8e264cddd137] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9f1f817a-ef56-48d9-afee-8e264cddd137] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.003819255s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-093300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-093300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-093300 delete -f testdata/storage-provisioner/pod.yaml: (1.448337749s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-093300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b339bf75-007d-44fe-a392-dbaeff2d3ede] Pending
helpers_test.go:344: "sp-pod" [b339bf75-007d-44fe-a392-dbaeff2d3ede] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b339bf75-007d-44fe-a392-dbaeff2d3ede] Running
2024/06/03 12:40:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003998052s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-093300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.99s)
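The core of the PVC test is that a file written under the mounted volume survives deleting and recreating the pod: touch /tmp/mount/foo in the first sp-pod, delete the pod, apply the same manifest again, then ls /tmp/mount in the replacement. A condensed sketch of that sequence via kubectl follows, reusing the manifest path and names from the log and skipping the readiness waits the real test performs.

// pvc_persistence.go - sketch: write a file into the PVC-backed mount,
// recreate the pod from the same manifest, and check the file is still there.
// Readiness waiting is omitted; paths and names are copied from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-093300"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// In practice, wait for the new sp-pod to be Running before this step.
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		out, err := kubectl(s...)
		fmt.Printf("kubectl %v\n%s", s, out)
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}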

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh -n functional-093300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 cp functional-093300:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3133441646/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh -n functional-093300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh -n functional-093300 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-093300 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-b8ck5" [4de7577d-b138-4884-8102-6ccb8d096c07] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-b8ck5" [4de7577d-b138-4884-8102-6ccb8d096c07] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004498831s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-093300 exec mysql-64454c8b5c-b8ck5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-093300 exec mysql-64454c8b5c-b8ck5 -- mysql -ppassword -e "show databases;": exit status 1 (226.942937ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-093300 exec mysql-64454c8b5c-b8ck5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-093300 exec mysql-64454c8b5c-b8ck5 -- mysql -ppassword -e "show databases;": exit status 1 (227.101966ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-093300 exec mysql-64454c8b5c-b8ck5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.76s)
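The two ERROR 2002 failures above just mean mysqld was still starting inside the pod; the test keeps re-running `kubectl exec ... mysql -e "show databases;"` until it succeeds. A minimal retry sketch follows; the pod name and command are from the log, while the attempt count and fixed backoff are illustrative choices, since the test's actual retry policy is not shown here.

// mysql_retry.go - sketch: retry `show databases;` inside the mysql pod until
// mysqld accepts connections. Pod name and command come from the log; the
// 10-attempt / 3-second backoff is an illustrative choice, not the test's.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func showDatabases() ([]byte, error) {
	return exec.Command("kubectl", "--context", "functional-093300",
		"exec", "mysql-64454c8b5c-b8ck5", "--",
		"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := showDatabases()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// ERROR 2002 means the server socket is not up yet; wait and retry.
		fmt.Printf("attempt %d failed (%v), retrying...\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}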

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1086251/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo cat /etc/test/nested/copy/1086251/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1086251.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo cat /etc/ssl/certs/1086251.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1086251.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo cat /usr/share/ca-certificates/1086251.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/10862512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo cat /etc/ssl/certs/10862512.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/10862512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo cat /usr/share/ca-certificates/10862512.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-093300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 ssh "sudo systemctl is-active docker": exit status 1 (247.956229ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 ssh "sudo systemctl is-active containerd": exit status 1 (262.105173ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
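Both probes above print "inactive" on stdout and exit non-zero over ssh (status 3, matching systemd's convention for an inactive unit), which is the expected result when crio is the configured runtime. The sketch below performs the same check but treats the "inactive" text on stdout as the success signal instead of relying on exit codes; it is illustrative, not the test's implementation.

// runtime_disabled.go - sketch: confirm docker and containerd are inactive in
// the VM when crio is the selected runtime. `systemctl is-active` prints the
// unit state on stdout; a non-zero exit for inactive units is expected here.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func unitState(unit string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-093300",
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		state := unitState(unit)
		fmt.Printf("%s: %s\n", unit, state)
		if state != "inactive" {
			fmt.Printf("expected %s to be inactive while crio is in use\n", unit)
		}
	}
}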

                                                
                                    
x
+
TestFunctional/parallel/License (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-093300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-093300
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-093300
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-093300 image ls --format short --alsologtostderr:
I0603 12:40:27.537949 1095555 out.go:291] Setting OutFile to fd 1 ...
I0603 12:40:27.538264 1095555 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:27.538280 1095555 out.go:304] Setting ErrFile to fd 2...
I0603 12:40:27.538287 1095555 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:27.538573 1095555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
I0603 12:40:27.539360 1095555 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:27.539510 1095555 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:27.540094 1095555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:27.540168 1095555 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:27.555958 1095555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
I0603 12:40:27.556493 1095555 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:27.557192 1095555 main.go:141] libmachine: Using API Version  1
I0603 12:40:27.557221 1095555 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:27.557670 1095555 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:27.557896 1095555 main.go:141] libmachine: (functional-093300) Calling .GetState
I0603 12:40:27.559855 1095555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:27.559909 1095555 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:27.575726 1095555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
I0603 12:40:27.576281 1095555 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:27.576947 1095555 main.go:141] libmachine: Using API Version  1
I0603 12:40:27.576976 1095555 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:27.577376 1095555 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:27.577646 1095555 main.go:141] libmachine: (functional-093300) Calling .DriverName
I0603 12:40:27.577924 1095555 ssh_runner.go:195] Run: systemctl --version
I0603 12:40:27.577967 1095555 main.go:141] libmachine: (functional-093300) Calling .GetSSHHostname
I0603 12:40:27.580949 1095555 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:27.581459 1095555 main.go:141] libmachine: (functional-093300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:de:99", ip: ""} in network mk-functional-093300: {Iface:virbr1 ExpiryTime:2024-06-03 13:37:22 +0000 UTC Type:0 Mac:52:54:00:13:de:99 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-093300 Clientid:01:52:54:00:13:de:99}
I0603 12:40:27.581482 1095555 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined IP address 192.168.39.250 and MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:27.581788 1095555 main.go:141] libmachine: (functional-093300) Calling .GetSSHPort
I0603 12:40:27.581977 1095555 main.go:141] libmachine: (functional-093300) Calling .GetSSHKeyPath
I0603 12:40:27.582184 1095555 main.go:141] libmachine: (functional-093300) Calling .GetSSHUsername
I0603 12:40:27.582380 1095555 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/functional-093300/id_rsa Username:docker}
I0603 12:40:27.671680 1095555 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 12:40:27.757733 1095555 main.go:141] libmachine: Making call to close driver server
I0603 12:40:27.757752 1095555 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:27.758063 1095555 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:27.758084 1095555 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:40:27.758095 1095555 main.go:141] libmachine: Making call to close driver server
I0603 12:40:27.758103 1095555 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:27.758348 1095555 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:27.758362 1095555 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-093300 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-093300  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-093300  | 277c7d9163c91 | 3.33kB |
| localhost/my-image                      | functional-093300  | bbe3ebd3c64f1 | 1.47MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 4f67c83422ec7 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-093300 image ls --format table --alsologtostderr:
I0603 12:40:31.154703 1095788 out.go:291] Setting OutFile to fd 1 ...
I0603 12:40:31.155002 1095788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:31.155016 1095788 out.go:304] Setting ErrFile to fd 2...
I0603 12:40:31.155022 1095788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:31.155369 1095788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
I0603 12:40:31.156266 1095788 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:31.156419 1095788 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:31.156993 1095788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:31.157062 1095788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:31.174941 1095788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36967
I0603 12:40:31.175410 1095788 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:31.175990 1095788 main.go:141] libmachine: Using API Version  1
I0603 12:40:31.176014 1095788 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:31.176379 1095788 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:31.176628 1095788 main.go:141] libmachine: (functional-093300) Calling .GetState
I0603 12:40:31.178537 1095788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:31.178579 1095788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:31.194210 1095788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
I0603 12:40:31.194640 1095788 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:31.195174 1095788 main.go:141] libmachine: Using API Version  1
I0603 12:40:31.195199 1095788 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:31.195512 1095788 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:31.195730 1095788 main.go:141] libmachine: (functional-093300) Calling .DriverName
I0603 12:40:31.195949 1095788 ssh_runner.go:195] Run: systemctl --version
I0603 12:40:31.195979 1095788 main.go:141] libmachine: (functional-093300) Calling .GetSSHHostname
I0603 12:40:31.199573 1095788 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:31.199961 1095788 main.go:141] libmachine: (functional-093300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:de:99", ip: ""} in network mk-functional-093300: {Iface:virbr1 ExpiryTime:2024-06-03 13:37:22 +0000 UTC Type:0 Mac:52:54:00:13:de:99 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-093300 Clientid:01:52:54:00:13:de:99}
I0603 12:40:31.199999 1095788 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined IP address 192.168.39.250 and MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:31.200104 1095788 main.go:141] libmachine: (functional-093300) Calling .GetSSHPort
I0603 12:40:31.200322 1095788 main.go:141] libmachine: (functional-093300) Calling .GetSSHKeyPath
I0603 12:40:31.200467 1095788 main.go:141] libmachine: (functional-093300) Calling .GetSSHUsername
I0603 12:40:31.200622 1095788 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/functional-093300/id_rsa Username:docker}
I0603 12:40:31.300188 1095788 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 12:40:31.358439 1095788 main.go:141] libmachine: Making call to close driver server
I0603 12:40:31.358461 1095788 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:31.358761 1095788 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:31.358785 1095788 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:40:31.358796 1095788 main.go:141] libmachine: Making call to close driver server
I0603 12:40:31.358805 1095788 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:31.359094 1095788 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:31.359114 1095788 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-093300 image ls --format json --alsologtostderr:
[{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-093300"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"s
ize":"4631262"},{"id":"bbe3ebd3c64f15a7e5b61ed21de4b5755f3acac0a5f5f2522435ad20c7e41f99","repoDigests":["localhost/my-image@sha256:7e7e24b2142adb274484f88418cb7446413e86b5c1c0a72722d395e21f589215"],"repoTags":["localhost/my-image:functional-093300"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"a8df5677062630f9b2070e48bb1d9feef78a481dd14fa0406cc6fbb91e41df00","repoDigests":["docker.io/library/695db8b2f25aaaa191d9e9c182db01267a5e79b899cb39762f6aa4a735fbaed0-tmp@sha256:55d4565e47005c76709b332ef37c8e696903e4405558e76561382b9fb2ae2eca"],"repoTag
s":[],"size":"1466018"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":["docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d","docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232"],"repoTags":["docker.io/library/nginx:latest"],"size":"191814165"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb1
04c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"112170310"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/
mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"277c7d9163c91ff9f5dc6249c0c65f581d346ea761af2decc83253dfa88be0a7","repoDigests":["localhost/minikube-local-cache-test@sha256:810ade57b5101c2bc9d1c9ae2b3121b2c7f51a841ba8311d4d3ccaa2364147ef"],"repoTags":["localhost/minikube-local-
cache-test:functional-093300"],"size":"3330"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"85933465"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28
a8f9913ea8c3bcc08e610c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-093300 image ls --format json --alsologtostderr:
I0603 12:40:30.869130 1095718 out.go:291] Setting OutFile to fd 1 ...
I0603 12:40:30.869281 1095718 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:30.869294 1095718 out.go:304] Setting ErrFile to fd 2...
I0603 12:40:30.869300 1095718 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:30.869576 1095718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
I0603 12:40:30.870219 1095718 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:30.870327 1095718 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:30.870709 1095718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:30.870769 1095718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:30.886452 1095718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40373
I0603 12:40:30.886925 1095718 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:30.887609 1095718 main.go:141] libmachine: Using API Version  1
I0603 12:40:30.887641 1095718 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:30.888033 1095718 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:30.888263 1095718 main.go:141] libmachine: (functional-093300) Calling .GetState
I0603 12:40:30.890211 1095718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:30.890253 1095718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:30.905392 1095718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
I0603 12:40:30.905900 1095718 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:30.906663 1095718 main.go:141] libmachine: Using API Version  1
I0603 12:40:30.906703 1095718 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:30.907078 1095718 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:30.907334 1095718 main.go:141] libmachine: (functional-093300) Calling .DriverName
I0603 12:40:30.907557 1095718 ssh_runner.go:195] Run: systemctl --version
I0603 12:40:30.907589 1095718 main.go:141] libmachine: (functional-093300) Calling .GetSSHHostname
I0603 12:40:30.910821 1095718 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:30.911246 1095718 main.go:141] libmachine: (functional-093300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:de:99", ip: ""} in network mk-functional-093300: {Iface:virbr1 ExpiryTime:2024-06-03 13:37:22 +0000 UTC Type:0 Mac:52:54:00:13:de:99 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-093300 Clientid:01:52:54:00:13:de:99}
I0603 12:40:30.911282 1095718 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined IP address 192.168.39.250 and MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:30.911530 1095718 main.go:141] libmachine: (functional-093300) Calling .GetSSHPort
I0603 12:40:30.911708 1095718 main.go:141] libmachine: (functional-093300) Calling .GetSSHKeyPath
I0603 12:40:30.911874 1095718 main.go:141] libmachine: (functional-093300) Calling .GetSSHUsername
I0603 12:40:30.912082 1095718 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/functional-093300/id_rsa Username:docker}
I0603 12:40:31.011133 1095718 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 12:40:31.102836 1095718 main.go:141] libmachine: Making call to close driver server
I0603 12:40:31.102857 1095718 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:31.103125 1095718 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:31.103154 1095718 main.go:141] libmachine: (functional-093300) DBG | Closing plugin on server side
I0603 12:40:31.103161 1095718 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:40:31.103172 1095718 main.go:141] libmachine: Making call to close driver server
I0603 12:40:31.103180 1095718 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:31.103390 1095718 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:31.103400 1095718 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-093300 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests:
- docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d
- docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232
repoTags:
- docker.io/library/nginx:latest
size: "191814165"
- id: 277c7d9163c91ff9f5dc6249c0c65f581d346ea761af2decc83253dfa88be0a7
repoDigests:
- localhost/minikube-local-cache-test@sha256:810ade57b5101c2bc9d1c9ae2b3121b2c7f51a841ba8311d4d3ccaa2364147ef
repoTags:
- localhost/minikube-local-cache-test:functional-093300
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-093300
size: "34114467"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-093300 image ls --format yaml --alsologtostderr:
I0603 12:40:27.811695 1095579 out.go:291] Setting OutFile to fd 1 ...
I0603 12:40:27.812008 1095579 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:27.812023 1095579 out.go:304] Setting ErrFile to fd 2...
I0603 12:40:27.812029 1095579 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:27.812348 1095579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
I0603 12:40:27.813278 1095579 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:27.813462 1095579 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:27.814079 1095579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:27.814154 1095579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:27.830089 1095579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
I0603 12:40:27.830590 1095579 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:27.831164 1095579 main.go:141] libmachine: Using API Version  1
I0603 12:40:27.831186 1095579 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:27.831610 1095579 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:27.831822 1095579 main.go:141] libmachine: (functional-093300) Calling .GetState
I0603 12:40:27.833657 1095579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:27.833700 1095579 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:27.849565 1095579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35183
I0603 12:40:27.850085 1095579 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:27.850698 1095579 main.go:141] libmachine: Using API Version  1
I0603 12:40:27.850721 1095579 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:27.851118 1095579 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:27.851309 1095579 main.go:141] libmachine: (functional-093300) Calling .DriverName
I0603 12:40:27.851517 1095579 ssh_runner.go:195] Run: systemctl --version
I0603 12:40:27.851540 1095579 main.go:141] libmachine: (functional-093300) Calling .GetSSHHostname
I0603 12:40:27.854430 1095579 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:27.854950 1095579 main.go:141] libmachine: (functional-093300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:de:99", ip: ""} in network mk-functional-093300: {Iface:virbr1 ExpiryTime:2024-06-03 13:37:22 +0000 UTC Type:0 Mac:52:54:00:13:de:99 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-093300 Clientid:01:52:54:00:13:de:99}
I0603 12:40:27.854980 1095579 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined IP address 192.168.39.250 and MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:27.855180 1095579 main.go:141] libmachine: (functional-093300) Calling .GetSSHPort
I0603 12:40:27.855366 1095579 main.go:141] libmachine: (functional-093300) Calling .GetSSHKeyPath
I0603 12:40:27.855545 1095579 main.go:141] libmachine: (functional-093300) Calling .GetSSHUsername
I0603 12:40:27.855734 1095579 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/functional-093300/id_rsa Username:docker}
I0603 12:40:27.956623 1095579 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 12:40:28.058309 1095579 main.go:141] libmachine: Making call to close driver server
I0603 12:40:28.058332 1095579 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:28.058652 1095579 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:28.058664 1095579 main.go:141] libmachine: (functional-093300) DBG | Closing plugin on server side
I0603 12:40:28.058669 1095579 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:40:28.058679 1095579 main.go:141] libmachine: Making call to close driver server
I0603 12:40:28.058687 1095579 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:28.058948 1095579 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:28.058977 1095579 main.go:141] libmachine: (functional-093300) DBG | Closing plugin on server side
I0603 12:40:28.058979 1095579 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 ssh pgrep buildkitd: exit status 1 (258.713176ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image build -t localhost/my-image:functional-093300 testdata/build --alsologtostderr
E0603 12:40:29.388037 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 image build -t localhost/my-image:functional-093300 testdata/build --alsologtostderr: (2.160638487s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-093300 image build -t localhost/my-image:functional-093300 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a8df5677062
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-093300
--> bbe3ebd3c64
Successfully tagged localhost/my-image:functional-093300
bbe3ebd3c64f15a7e5b61ed21de4b5755f3acac0a5f5f2522435ad20c7e41f99
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-093300 image build -t localhost/my-image:functional-093300 testdata/build --alsologtostderr:
I0603 12:40:28.374932 1095643 out.go:291] Setting OutFile to fd 1 ...
I0603 12:40:28.375229 1095643 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:28.375240 1095643 out.go:304] Setting ErrFile to fd 2...
I0603 12:40:28.375245 1095643 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:40:28.375427 1095643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
I0603 12:40:28.376065 1095643 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:28.377190 1095643 config.go:182] Loaded profile config "functional-093300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:40:28.377628 1095643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:28.377713 1095643 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:28.393540 1095643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
I0603 12:40:28.394010 1095643 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:28.394725 1095643 main.go:141] libmachine: Using API Version  1
I0603 12:40:28.394748 1095643 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:28.395155 1095643 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:28.395388 1095643 main.go:141] libmachine: (functional-093300) Calling .GetState
I0603 12:40:28.397435 1095643 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:40:28.397490 1095643 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:40:28.412357 1095643 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
I0603 12:40:28.412861 1095643 main.go:141] libmachine: () Calling .GetVersion
I0603 12:40:28.413386 1095643 main.go:141] libmachine: Using API Version  1
I0603 12:40:28.413427 1095643 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:40:28.413752 1095643 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:40:28.414000 1095643 main.go:141] libmachine: (functional-093300) Calling .DriverName
I0603 12:40:28.414242 1095643 ssh_runner.go:195] Run: systemctl --version
I0603 12:40:28.414286 1095643 main.go:141] libmachine: (functional-093300) Calling .GetSSHHostname
I0603 12:40:28.417364 1095643 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:28.417788 1095643 main.go:141] libmachine: (functional-093300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:de:99", ip: ""} in network mk-functional-093300: {Iface:virbr1 ExpiryTime:2024-06-03 13:37:22 +0000 UTC Type:0 Mac:52:54:00:13:de:99 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:functional-093300 Clientid:01:52:54:00:13:de:99}
I0603 12:40:28.417823 1095643 main.go:141] libmachine: (functional-093300) DBG | domain functional-093300 has defined IP address 192.168.39.250 and MAC address 52:54:00:13:de:99 in network mk-functional-093300
I0603 12:40:28.417933 1095643 main.go:141] libmachine: (functional-093300) Calling .GetSSHPort
I0603 12:40:28.418125 1095643 main.go:141] libmachine: (functional-093300) Calling .GetSSHKeyPath
I0603 12:40:28.418280 1095643 main.go:141] libmachine: (functional-093300) Calling .GetSSHUsername
I0603 12:40:28.418431 1095643 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/functional-093300/id_rsa Username:docker}
I0603 12:40:28.543451 1095643 build_images.go:161] Building image from path: /tmp/build.1325296673.tar
I0603 12:40:28.543543 1095643 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0603 12:40:28.572154 1095643 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1325296673.tar
I0603 12:40:28.577874 1095643 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1325296673.tar: stat -c "%s %y" /var/lib/minikube/build/build.1325296673.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1325296673.tar': No such file or directory
I0603 12:40:28.577898 1095643 ssh_runner.go:362] scp /tmp/build.1325296673.tar --> /var/lib/minikube/build/build.1325296673.tar (3072 bytes)
I0603 12:40:28.616368 1095643 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1325296673
I0603 12:40:28.627734 1095643 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1325296673 -xf /var/lib/minikube/build/build.1325296673.tar
I0603 12:40:28.638925 1095643 crio.go:315] Building image: /var/lib/minikube/build/build.1325296673
I0603 12:40:28.639004 1095643 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-093300 /var/lib/minikube/build/build.1325296673 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0603 12:40:30.402560 1095643 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-093300 /var/lib/minikube/build/build.1325296673 --cgroup-manager=cgroupfs: (1.763519699s)
I0603 12:40:30.402635 1095643 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1325296673
I0603 12:40:30.426661 1095643 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1325296673.tar
I0603 12:40:30.478904 1095643 build_images.go:217] Built localhost/my-image:functional-093300 from /tmp/build.1325296673.tar
I0603 12:40:30.478955 1095643 build_images.go:133] succeeded building to: functional-093300
I0603 12:40:30.478961 1095643 build_images.go:134] failed building to: 
I0603 12:40:30.478994 1095643 main.go:141] libmachine: Making call to close driver server
I0603 12:40:30.479009 1095643 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:30.479340 1095643 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:30.479362 1095643 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:40:30.479371 1095643 main.go:141] libmachine: Making call to close driver server
I0603 12:40:30.479376 1095643 main.go:141] libmachine: (functional-093300) DBG | Closing plugin on server side
I0603 12:40:30.479380 1095643 main.go:141] libmachine: (functional-093300) Calling .Close
I0603 12:40:30.479623 1095643 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:40:30.479649 1095643 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)
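
For reference, the build exercised here can be reproduced by hand with the same commands the test runs; a minimal sketch (profile name, tag, and build context taken from the logged invocation, the grep filter added for illustration):

    # rebuild the test image and confirm the resulting tag is visible to the runtime
    out/minikube-linux-amd64 -p functional-093300 image build \
      -t localhost/my-image:functional-093300 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-093300 image ls | grep my-image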

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-093300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.87s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image load --daemon gcr.io/google-containers/addon-resizer:functional-093300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 image load --daemon gcr.io/google-containers/addon-resizer:functional-093300 --alsologtostderr: (4.754691961s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image load --daemon gcr.io/google-containers/addon-resizer:functional-093300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 image load --daemon gcr.io/google-containers/addon-resizer:functional-093300 --alsologtostderr: (2.874344177s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.07895466s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-093300
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image load --daemon gcr.io/google-containers/addon-resizer:functional-093300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 image load --daemon gcr.io/google-containers/addon-resizer:functional-093300 --alsologtostderr: (9.08321247s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.76s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-093300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-093300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-96jmn" [b4782147-1655-492a-873c-ede982577681] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-96jmn" [b4782147-1655-492a-873c-ede982577681] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.009056009s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image save gcr.io/google-containers/addon-resizer:functional-093300 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 image save gcr.io/google-containers/addon-resizer:functional-093300 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.949181739s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image rm gcr.io/google-containers/addon-resizer:functional-093300 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 service list -o json
functional_test.go:1490: Took "454.269773ms" to run "out/minikube-linux-amd64 -p functional-093300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.675731564s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.95s)
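
Together with ImageSaveToFile above, this completes a save/load round trip through a tarball. A hand-run sketch with the same profile and tag as the logged commands (the ./addon-resizer-save.tar path is illustrative):

    # export the image from the cluster runtime, load it back, then list images to confirm
    out/minikube-linux-amd64 -p functional-093300 image save \
      gcr.io/google-containers/addon-resizer:functional-093300 ./addon-resizer-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-093300 image load ./addon-resizer-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-093300 image ls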

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.250:30109
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.250:30109
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
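
The HTTPS, Format, and URL checks above all resolve the same NodePort for the hello-node service created in ServiceCmd/DeployApp; a minimal manual equivalent, assuming that deployment and service still exist:

    # ask minikube for the reachable endpoints of the hello-node NodePort service
    out/minikube-linux-amd64 -p functional-093300 service hello-node --url
    out/minikube-linux-amd64 -p functional-093300 service --namespace=default --https --url hello-node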

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.8s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdany-port3057850368/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1717418425210708690" to /tmp/TestFunctionalparallelMountCmdany-port3057850368/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1717418425210708690" to /tmp/TestFunctionalparallelMountCmdany-port3057850368/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1717418425210708690" to /tmp/TestFunctionalparallelMountCmdany-port3057850368/001/test-1717418425210708690
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.923679ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  3 12:40 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  3 12:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  3 12:40 test-1717418425210708690
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh cat /mount-9p/test-1717418425210708690
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-093300 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5ad16329-d24a-461f-b518-363803ff60ac] Pending
helpers_test.go:344: "busybox-mount" [5ad16329-d24a-461f-b518-363803ff60ac] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5ad16329-d24a-461f-b518-363803ff60ac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5ad16329-d24a-461f-b518-363803ff60ac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004101406s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-093300 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdany-port3057850368/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.80s)
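
The sequence above amounts to starting a 9p mount in the background and verifying it from inside the guest. A hand-run sketch using the same commands as the test (the /tmp/hostdir path is illustrative):

    # mount a host directory into the guest over 9p, verify it, then clean up
    out/minikube-linux-amd64 mount -p functional-093300 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-093300 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-093300 ssh "sudo umount -f /mount-9p"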

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-093300
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 image save --daemon gcr.io/google-containers/addon-resizer:functional-093300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-093300 image save --daemon gcr.io/google-containers/addon-resizer:functional-093300 --alsologtostderr: (1.141497545s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-093300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.18s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "297.492468ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "47.622032ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "298.370009ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "57.879051ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdspecific-port3416653639/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (234.392459ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdspecific-port3416653639/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 ssh "sudo umount -f /mount-9p": exit status 1 (233.290915ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-093300 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdspecific-port3416653639/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup633915936/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup633915936/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup633915936/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T" /mount1: exit status 1 (304.378463ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-093300 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-093300 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup633915936/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup633915936/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-093300 /tmp/TestFunctionalparallelMountCmdVerifyCleanup633915936/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-093300
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-093300
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-093300
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-220492 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0603 12:42:45.542287 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:43:13.229028 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-220492 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.571284902s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.25s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-220492 -- rollout status deployment/busybox: (2.333362584s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-5z6j2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-m229v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-stmtj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-5z6j2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-m229v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-stmtj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-5z6j2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-m229v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-stmtj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.59s)
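Note: the DeployApp checks above first list the busybox pod names via a jsonpath query and then run nslookup in each one. A rough standalone equivalent, given only as a sketch and assuming kubectl already points at the ha-220492 context:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same jsonpath query the test uses to discover the busybox pod names.
	out, err := exec.Command("kubectl", "get", "pods", "-o",
		"jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		// Each pod must resolve the in-cluster service name, as in the test.
		res, err := exec.Command("kubectl", "exec", pod, "--",
			"nslookup", "kubernetes.default.svc.cluster.local").CombinedOutput()
		fmt.Printf("%s: err=%v\n%s", pod, err, res)
	}
}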

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-5z6j2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-5z6j2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-m229v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-m229v -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-stmtj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-220492 -- exec busybox-fc5497c4f-stmtj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)
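Note: PingHostFromPods first extracts the host gateway address by resolving host.minikube.internal inside a pod (the awk 'NR==5' / cut pipeline above pulls the address out of busybox nslookup output), then pings it once. A sketch of the extraction step, assuming kubectl is on the ha-220492 context and using a pod name taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostIP resolves host.minikube.internal from inside the given pod using the
// same shell pipeline as the test above.
func hostIP(pod string) (string, error) {
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ip, err := hostIP("busybox-fc5497c4f-5z6j2") // pod name specific to this run
	if err != nil {
		panic(err)
	}
	fmt.Println("host.minikube.internal resolves to", ip)
	// The test then verifies reachability with:
	// kubectl exec <pod> -- sh -c "ping -c 1 <ip>"
}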

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-220492 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-220492 -v=7 --alsologtostderr: (43.752549456s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
E0603 12:44:58.228898 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 12:44:58.234644 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 12:44:58.245786 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 12:44:58.266507 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 12:44:58.307311 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 12:44:58.387481 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 12:44:58.547858 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.59s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-220492 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
E0603 12:44:58.868287 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status --output json -v=7 --alsologtostderr
E0603 12:44:59.508949 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp testdata/cp-test.txt ha-220492:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3428699095/001/cp-test_ha-220492.txt
E0603 12:45:00.789548 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492:/home/docker/cp-test.txt ha-220492-m02:/home/docker/cp-test_ha-220492_ha-220492-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m02 "sudo cat /home/docker/cp-test_ha-220492_ha-220492-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492:/home/docker/cp-test.txt ha-220492-m03:/home/docker/cp-test_ha-220492_ha-220492-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m03 "sudo cat /home/docker/cp-test_ha-220492_ha-220492-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492:/home/docker/cp-test.txt ha-220492-m04:/home/docker/cp-test_ha-220492_ha-220492-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m04 "sudo cat /home/docker/cp-test_ha-220492_ha-220492-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp testdata/cp-test.txt ha-220492-m02:/home/docker/cp-test.txt
E0603 12:45:03.349967 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3428699095/001/cp-test_ha-220492-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m02:/home/docker/cp-test.txt ha-220492:/home/docker/cp-test_ha-220492-m02_ha-220492.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492 "sudo cat /home/docker/cp-test_ha-220492-m02_ha-220492.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m02:/home/docker/cp-test.txt ha-220492-m03:/home/docker/cp-test_ha-220492-m02_ha-220492-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m03 "sudo cat /home/docker/cp-test_ha-220492-m02_ha-220492-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m02:/home/docker/cp-test.txt ha-220492-m04:/home/docker/cp-test_ha-220492-m02_ha-220492-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m04 "sudo cat /home/docker/cp-test_ha-220492-m02_ha-220492-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp testdata/cp-test.txt ha-220492-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3428699095/001/cp-test_ha-220492-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt ha-220492:/home/docker/cp-test_ha-220492-m03_ha-220492.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492 "sudo cat /home/docker/cp-test_ha-220492-m03_ha-220492.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt ha-220492-m02:/home/docker/cp-test_ha-220492-m03_ha-220492-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m02 "sudo cat /home/docker/cp-test_ha-220492-m03_ha-220492-m02.txt"
E0603 12:45:08.470881 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m03:/home/docker/cp-test.txt ha-220492-m04:/home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m04 "sudo cat /home/docker/cp-test_ha-220492-m03_ha-220492-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp testdata/cp-test.txt ha-220492-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3428699095/001/cp-test_ha-220492-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt ha-220492:/home/docker/cp-test_ha-220492-m04_ha-220492.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492 "sudo cat /home/docker/cp-test_ha-220492-m04_ha-220492.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt ha-220492-m02:/home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m02 "sudo cat /home/docker/cp-test_ha-220492-m04_ha-220492-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 cp ha-220492-m04:/home/docker/cp-test.txt ha-220492-m03:/home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 ssh -n ha-220492-m03 "sudo cat /home/docker/cp-test_ha-220492-m04_ha-220492-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.05s)
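Note: each CopyFile step above is a copy followed by a read-back over SSH. A condensed sketch of one such round trip, assuming a minikube binary on PATH and the ha-220492 profile with its m02 node as in this run:

package main

import (
	"fmt"
	"os/exec"
)

// copyAndReadBack copies a local file onto a node with `minikube cp` and then
// reads it back with `minikube ssh -n <node>`, mirroring the cp/cat pairs above.
func copyAndReadBack(profile, node, local, remote string) (string, error) {
	if err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).Run(); err != nil {
		return "", err
	}
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	return string(out), err
}

func main() {
	got, err := copyAndReadBack("ha-220492", "ha-220492-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Print(got)
}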

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.488947284s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-220492 node delete m03 -v=7 --alsologtostderr: (16.325587147s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.06s)
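Note: the final readiness check above uses a kubectl go-template that prints the Ready condition status for every node. A small sketch that runs the same template and counts healthy nodes, assuming kubectl points at the cluster:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the test: emit the Ready condition status for each node.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, s := range strings.Fields(string(out)) {
		if s == "True" {
			ready++
		}
	}
	fmt.Printf("nodes reporting Ready=True: %d\n", ready)
}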

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (354.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-220492 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0603 12:57:45.541651 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
E0603 12:59:58.228969 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 13:01:21.274762 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 13:02:45.541286 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-220492 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m53.91956193s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (354.71s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-220492 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-220492 --control-plane -v=7 --alsologtostderr: (1m16.741341447s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-220492 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.59s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
TestJSONOutput/start/Command (60.14s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-081144 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0603 13:04:58.228417 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-081144 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.142249908s)
--- PASS: TestJSONOutput/start/Command (60.14s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-081144 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-081144 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.69s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-081144 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-081144 --output=json --user=testUser: (7.693790265s)
--- PASS: TestJSONOutput/stop/Command (7.69s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-355697 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-355697 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.34846ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7941be0a-c6be-4d53-8613-d48696d6d654","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-355697] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ece2195b-5f6e-480c-8872-14fa2814f56b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19011"}}
	{"specversion":"1.0","id":"4a4d96e0-5f80-4a99-a4ac-ce7676d7c562","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3a7e37ff-675c-4fea-bbce-37e817fe3f97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig"}}
	{"specversion":"1.0","id":"c8ae9623-41a6-4acc-afa5-788952d5194e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube"}}
	{"specversion":"1.0","id":"84164b74-b2f8-44d9-8ed1-064f04a4d194","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"abbd52fd-8015-40f3-ad1f-5a7044eab9da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f547f4af-101e-4652-8622-b24dd92ae498","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-355697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-355697
--- PASS: TestErrorJSONOutput (0.20s)
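Note: with --output=json, every minikube line is a CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data), as seen in the stdout block above. A minimal sketch for decoding one such line, using an abridged copy of the io.k8s.sigs.minikube.error event captured in this run:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the JSON lines captured above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Abridged copy of the error event from the stdout block above.
	line := `{"specversion":"1.0","id":"f547f4af-101e-4652-8622-b24dd92ae498","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}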

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (86.02s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-134423 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-134423 --driver=kvm2  --container-runtime=crio: (43.372001657s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-137554 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-137554 --driver=kvm2  --container-runtime=crio: (40.175392325s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-134423
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-137554
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-137554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-137554
helpers_test.go:175: Cleaning up "first-134423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-134423
--- PASS: TestMinikubeProfile (86.02s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.15s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-939864 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0603 13:07:45.542347 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-939864 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.149288867s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.15s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-939864 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-939864 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
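Note: VerifyMountFirst simply lists /minikube-host and greps the guest mount table for a 9p entry. An equivalent check, offered only as a sketch and assuming a minikube binary on PATH plus the profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// has9pMount reports whether the guest lists any 9p filesystem, mirroring the
// `minikube ssh -- mount | grep 9p` step in the test above.
func has9pMount(profile string) (bool, error) {
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "9p"), nil
}

func main() {
	ok, err := has9pMount("mount-start-1-939864") // profile name from this run
	if err != nil {
		panic(err)
	}
	fmt.Println("9p mount present:", ok)
}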

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-956587 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-956587 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.997050706s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.00s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956587 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956587 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-939864 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956587 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956587 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-956587
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-956587: (1.284749674s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.3s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-956587
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-956587: (21.295558715s)
--- PASS: TestMountStart/serial/RestartStopped (22.30s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956587 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-956587 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (94.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-101468 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0603 13:09:58.228993 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-101468 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m34.337748807s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.75s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-101468 -- rollout status deployment/busybox: (1.831098171s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-6lvmz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-7jrcp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-6lvmz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-7jrcp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-6lvmz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-7jrcp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.40s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-6lvmz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-6lvmz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-7jrcp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-101468 -- exec busybox-fc5497c4f-7jrcp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (37.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-101468 -v 3 --alsologtostderr
E0603 13:10:48.590673 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-101468 -v 3 --alsologtostderr: (36.529567899s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (37.11s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-101468 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp testdata/cp-test.txt multinode-101468:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp multinode-101468:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2236251675/001/cp-test_multinode-101468.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp multinode-101468:/home/docker/cp-test.txt multinode-101468-m02:/home/docker/cp-test_multinode-101468_multinode-101468-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m02 "sudo cat /home/docker/cp-test_multinode-101468_multinode-101468-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp multinode-101468:/home/docker/cp-test.txt multinode-101468-m03:/home/docker/cp-test_multinode-101468_multinode-101468-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m03 "sudo cat /home/docker/cp-test_multinode-101468_multinode-101468-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp testdata/cp-test.txt multinode-101468-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp multinode-101468-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2236251675/001/cp-test_multinode-101468-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp multinode-101468-m02:/home/docker/cp-test.txt multinode-101468:/home/docker/cp-test_multinode-101468-m02_multinode-101468.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468 "sudo cat /home/docker/cp-test_multinode-101468-m02_multinode-101468.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp multinode-101468-m02:/home/docker/cp-test.txt multinode-101468-m03:/home/docker/cp-test_multinode-101468-m02_multinode-101468-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m03 "sudo cat /home/docker/cp-test_multinode-101468-m02_multinode-101468-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp testdata/cp-test.txt multinode-101468-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp multinode-101468-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2236251675/001/cp-test_multinode-101468-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp multinode-101468-m03:/home/docker/cp-test.txt multinode-101468:/home/docker/cp-test_multinode-101468-m03_multinode-101468.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468 "sudo cat /home/docker/cp-test_multinode-101468-m03_multinode-101468.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 cp multinode-101468-m03:/home/docker/cp-test.txt multinode-101468-m02:/home/docker/cp-test_multinode-101468-m03_multinode-101468-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 ssh -n multinode-101468-m02 "sudo cat /home/docker/cp-test_multinode-101468-m03_multinode-101468-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.37s)

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-101468 node stop m03: (1.510917937s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-101468 status: exit status 7 (418.217063ms)

                                                
                                                
-- stdout --
	multinode-101468
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-101468-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-101468-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-101468 status --alsologtostderr: exit status 7 (429.280232ms)

                                                
                                                
-- stdout --
	multinode-101468
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-101468-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-101468-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 13:11:19.381526 1113249 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:11:19.381762 1113249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:11:19.381773 1113249 out.go:304] Setting ErrFile to fd 2...
	I0603 13:11:19.381777 1113249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:11:19.381948 1113249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:11:19.382102 1113249 out.go:298] Setting JSON to false
	I0603 13:11:19.382127 1113249 mustload.go:65] Loading cluster: multinode-101468
	I0603 13:11:19.382277 1113249 notify.go:220] Checking for updates...
	I0603 13:11:19.382467 1113249 config.go:182] Loaded profile config "multinode-101468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0603 13:11:19.382483 1113249 status.go:255] checking status of multinode-101468 ...
	I0603 13:11:19.383526 1113249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:11:19.383589 1113249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:11:19.399116 1113249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40963
	I0603 13:11:19.399540 1113249 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:11:19.400125 1113249 main.go:141] libmachine: Using API Version  1
	I0603 13:11:19.400152 1113249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:11:19.400474 1113249 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:11:19.400653 1113249 main.go:141] libmachine: (multinode-101468) Calling .GetState
	I0603 13:11:19.402366 1113249 status.go:330] multinode-101468 host status = "Running" (err=<nil>)
	I0603 13:11:19.402381 1113249 host.go:66] Checking if "multinode-101468" exists ...
	I0603 13:11:19.402670 1113249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:11:19.402723 1113249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:11:19.418178 1113249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I0603 13:11:19.418585 1113249 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:11:19.419045 1113249 main.go:141] libmachine: Using API Version  1
	I0603 13:11:19.419068 1113249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:11:19.419427 1113249 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:11:19.419626 1113249 main.go:141] libmachine: (multinode-101468) Calling .GetIP
	I0603 13:11:19.422721 1113249 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:11:19.423140 1113249 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:11:19.423169 1113249 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:11:19.423283 1113249 host.go:66] Checking if "multinode-101468" exists ...
	I0603 13:11:19.423609 1113249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:11:19.423659 1113249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:11:19.440743 1113249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35725
	I0603 13:11:19.441166 1113249 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:11:19.441742 1113249 main.go:141] libmachine: Using API Version  1
	I0603 13:11:19.441771 1113249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:11:19.442085 1113249 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:11:19.442221 1113249 main.go:141] libmachine: (multinode-101468) Calling .DriverName
	I0603 13:11:19.442458 1113249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 13:11:19.442484 1113249 main.go:141] libmachine: (multinode-101468) Calling .GetSSHHostname
	I0603 13:11:19.445099 1113249 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:11:19.445506 1113249 main.go:141] libmachine: (multinode-101468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:8e:40", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:09:07 +0000 UTC Type:0 Mac:52:54:00:ab:8e:40 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-101468 Clientid:01:52:54:00:ab:8e:40}
	I0603 13:11:19.445539 1113249 main.go:141] libmachine: (multinode-101468) DBG | domain multinode-101468 has defined IP address 192.168.39.141 and MAC address 52:54:00:ab:8e:40 in network mk-multinode-101468
	I0603 13:11:19.445688 1113249 main.go:141] libmachine: (multinode-101468) Calling .GetSSHPort
	I0603 13:11:19.445896 1113249 main.go:141] libmachine: (multinode-101468) Calling .GetSSHKeyPath
	I0603 13:11:19.446065 1113249 main.go:141] libmachine: (multinode-101468) Calling .GetSSHUsername
	I0603 13:11:19.446232 1113249 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468/id_rsa Username:docker}
	I0603 13:11:19.528993 1113249 ssh_runner.go:195] Run: systemctl --version
	I0603 13:11:19.535201 1113249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:11:19.549974 1113249 kubeconfig.go:125] found "multinode-101468" server: "https://192.168.39.141:8443"
	I0603 13:11:19.550012 1113249 api_server.go:166] Checking apiserver status ...
	I0603 13:11:19.550044 1113249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0603 13:11:19.564310 1113249 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup
	W0603 13:11:19.578646 1113249 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0603 13:11:19.578699 1113249 ssh_runner.go:195] Run: ls
	I0603 13:11:19.583069 1113249 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0603 13:11:19.588196 1113249 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I0603 13:11:19.588222 1113249 status.go:422] multinode-101468 apiserver status = Running (err=<nil>)
	I0603 13:11:19.588235 1113249 status.go:257] multinode-101468 status: &{Name:multinode-101468 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0603 13:11:19.588258 1113249 status.go:255] checking status of multinode-101468-m02 ...
	I0603 13:11:19.588729 1113249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:11:19.588762 1113249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:11:19.604781 1113249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45837
	I0603 13:11:19.605273 1113249 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:11:19.605828 1113249 main.go:141] libmachine: Using API Version  1
	I0603 13:11:19.605849 1113249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:11:19.606179 1113249 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:11:19.606404 1113249 main.go:141] libmachine: (multinode-101468-m02) Calling .GetState
	I0603 13:11:19.607968 1113249 status.go:330] multinode-101468-m02 host status = "Running" (err=<nil>)
	I0603 13:11:19.607989 1113249 host.go:66] Checking if "multinode-101468-m02" exists ...
	I0603 13:11:19.608394 1113249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:11:19.608446 1113249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:11:19.623842 1113249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46545
	I0603 13:11:19.624225 1113249 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:11:19.624703 1113249 main.go:141] libmachine: Using API Version  1
	I0603 13:11:19.624726 1113249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:11:19.625061 1113249 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:11:19.625306 1113249 main.go:141] libmachine: (multinode-101468-m02) Calling .GetIP
	I0603 13:11:19.628087 1113249 main.go:141] libmachine: (multinode-101468-m02) DBG | domain multinode-101468-m02 has defined MAC address 52:54:00:15:28:3c in network mk-multinode-101468
	I0603 13:11:19.628496 1113249 main.go:141] libmachine: (multinode-101468-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:28:3c", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:10:06 +0000 UTC Type:0 Mac:52:54:00:15:28:3c Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-101468-m02 Clientid:01:52:54:00:15:28:3c}
	I0603 13:11:19.628526 1113249 main.go:141] libmachine: (multinode-101468-m02) DBG | domain multinode-101468-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:15:28:3c in network mk-multinode-101468
	I0603 13:11:19.628676 1113249 host.go:66] Checking if "multinode-101468-m02" exists ...
	I0603 13:11:19.628963 1113249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:11:19.629002 1113249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:11:19.644644 1113249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0603 13:11:19.645050 1113249 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:11:19.645607 1113249 main.go:141] libmachine: Using API Version  1
	I0603 13:11:19.645633 1113249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:11:19.645935 1113249 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:11:19.646140 1113249 main.go:141] libmachine: (multinode-101468-m02) Calling .DriverName
	I0603 13:11:19.646322 1113249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0603 13:11:19.646338 1113249 main.go:141] libmachine: (multinode-101468-m02) Calling .GetSSHHostname
	I0603 13:11:19.648834 1113249 main.go:141] libmachine: (multinode-101468-m02) DBG | domain multinode-101468-m02 has defined MAC address 52:54:00:15:28:3c in network mk-multinode-101468
	I0603 13:11:19.649255 1113249 main.go:141] libmachine: (multinode-101468-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:28:3c", ip: ""} in network mk-multinode-101468: {Iface:virbr1 ExpiryTime:2024-06-03 14:10:06 +0000 UTC Type:0 Mac:52:54:00:15:28:3c Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-101468-m02 Clientid:01:52:54:00:15:28:3c}
	I0603 13:11:19.649291 1113249 main.go:141] libmachine: (multinode-101468-m02) DBG | domain multinode-101468-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:15:28:3c in network mk-multinode-101468
	I0603 13:11:19.649442 1113249 main.go:141] libmachine: (multinode-101468-m02) Calling .GetSSHPort
	I0603 13:11:19.649642 1113249 main.go:141] libmachine: (multinode-101468-m02) Calling .GetSSHKeyPath
	I0603 13:11:19.649791 1113249 main.go:141] libmachine: (multinode-101468-m02) Calling .GetSSHUsername
	I0603 13:11:19.649918 1113249 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/multinode-101468-m02/id_rsa Username:docker}
	I0603 13:11:19.732783 1113249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0603 13:11:19.746805 1113249 status.go:257] multinode-101468-m02 status: &{Name:multinode-101468-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0603 13:11:19.746842 1113249 status.go:255] checking status of multinode-101468-m03 ...
	I0603 13:11:19.747180 1113249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0603 13:11:19.747215 1113249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0603 13:11:19.763675 1113249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44293
	I0603 13:11:19.764102 1113249 main.go:141] libmachine: () Calling .GetVersion
	I0603 13:11:19.764608 1113249 main.go:141] libmachine: Using API Version  1
	I0603 13:11:19.764634 1113249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0603 13:11:19.764976 1113249 main.go:141] libmachine: () Calling .GetMachineName
	I0603 13:11:19.765216 1113249 main.go:141] libmachine: (multinode-101468-m03) Calling .GetState
	I0603 13:11:19.766762 1113249 status.go:330] multinode-101468-m03 host status = "Stopped" (err=<nil>)
	I0603 13:11:19.766779 1113249 status.go:343] host is not running, skipping remaining checks
	I0603 13:11:19.766788 1113249 status.go:257] multinode-101468-m03 status: &{Name:multinode-101468-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
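The status output above is the reason for the non-zero exit: minikube status reports each node of the profile separately and, with m03 stopped, returns exit code 7 instead of 0. A minimal way to repeat the check by hand, assuming the multinode-101468 profile from this run:

    # stop one worker, then read per-node status; a non-zero code signals a stopped node
    out/minikube-linux-amd64 -p multinode-101468 node stop m03
    out/minikube-linux-amd64 -p multinode-101468 status --alsologtostderr
    echo "status exit code: $?"   # 7 expected while m03 is down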

                                                
                                    
TestMultiNode/serial/StartAfterStop (26.69s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-101468 node start m03 -v=7 --alsologtostderr: (26.07319741s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.69s)
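Restarting the node is the inverse operation; once node start returns, status should exit 0 again and kubectl should list all three nodes. A short sketch, assuming the same profile:

    # bring the stopped worker back and confirm the cluster sees it again
    out/minikube-linux-amd64 -p multinode-101468 node start m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-101468 status -v=7 --alsologtostderr
    kubectl get nodes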

                                                
                                    
TestMultiNode/serial/DeleteNode (2.45s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-101468 node delete m03: (1.915893525s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.45s)
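The go-template in the last kubectl call only prints the Ready condition of every node, one per line, so after deleting m03 the expected output is two lines of True. Roughly the same query without the extra quoting the harness adds:

    # print only each node's Ready condition; two nodes remain, so expect two lines of True
    kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'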

                                                
                                    
TestMultiNode/serial/RestartMultiNode (208.44s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-101468 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0603 13:19:58.228635 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-101468 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m27.888061071s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-101468 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (208.44s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.94s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-101468
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-101468-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-101468-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (66.984338ms)

                                                
                                                
-- stdout --
	* [multinode-101468-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-101468-m02' is duplicated with machine name 'multinode-101468-m02' in profile 'multinode-101468'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-101468-m03 --driver=kvm2  --container-runtime=crio
E0603 13:22:45.542636 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-101468-m03 --driver=kvm2  --container-runtime=crio: (44.821762814s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-101468
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-101468: exit status 80 (222.93229ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-101468 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-101468-m03 already exists in multinode-101468-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-101468-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.94s)
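Both failures in this test are pre-flight validation rather than VM errors: multinode-101468-m02 is already the machine name of the second node inside the multinode-101468 profile, so reusing it as a profile name is refused with MK_USAGE (exit 14), and node add refuses to add a node whose machine name is already taken by another profile (exit 80). The first rejection, assuming the multinode profile still exists:

    # a profile name may not collide with an existing machine name; start exits without creating anything
    out/minikube-linux-amd64 node list -p multinode-101468
    out/minikube-linux-amd64 start -p multinode-101468-m02 --driver=kvm2 --container-runtime=crio
    echo "start exit code: $?"   # 14 expected for the duplicated name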

                                                
                                    
TestScheduledStopUnix (111.01s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-990977 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-990977 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.384402698s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-990977 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-990977 -n scheduled-stop-990977
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-990977 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-990977 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-990977 -n scheduled-stop-990977
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-990977
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-990977 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0603 13:29:58.228893 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-990977
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-990977: exit status 7 (75.238677ms)

                                                
                                                
-- stdout --
	scheduled-stop-990977
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-990977 -n scheduled-stop-990977
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-990977 -n scheduled-stop-990977: exit status 7 (65.012919ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-990977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-990977
--- PASS: TestScheduledStopUnix (111.01s)
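The sequence exercised above is: schedule a stop in the future, cancel it, then schedule a short one and let it fire, at which point status reports Stopped and exits 7. Condensed, with an arbitrary wait in place of the test's polling:

    # schedule, cancel, re-schedule, then confirm the VM actually stopped
    out/minikube-linux-amd64 stop -p scheduled-stop-990977 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-990977 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-990977 --schedule 15s
    sleep 30   # not part of the test; just gives the 15s schedule time to fire
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-990977   # prints Stopped, exits 7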

                                                
                                    
TestRunningBinaryUpgrade (217.4s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1386867306 start -p running-upgrade-439186 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1386867306 start -p running-upgrade-439186 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m9.304019767s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-439186 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-439186 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.399845128s)
helpers_test.go:175: Cleaning up "running-upgrade-439186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-439186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-439186: (1.242507379s)
--- PASS: TestRunningBinaryUpgrade (217.40s)
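The upgrade path checked here is an in-place one: a cluster created by an older released binary (the /tmp copy is whatever version the test downloaded, v1.26.0 in this run) is started again with the binary under test against the same profile and has to come up cleanly. Stripped of the harness wrappers it is roughly:

    # create with the old release, then restart the same profile with the new binary
    /tmp/minikube-v1.26.0.1386867306 start -p running-upgrade-439186 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-439186 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-439186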

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-541206 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-541206 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (79.991001ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-541206] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
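This case only checks argument validation: --no-kubernetes and --kubernetes-version contradict each other, so start bails out with MK_USAGE (exit 14) before any machine is created. The error text also points at the fix when the version comes from global config:

    # mutually exclusive flags; exits 14 without creating a machine
    out/minikube-linux-amd64 start -p NoKubernetes-541206 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 config unset kubernetes-version   # the workaround suggested in the error message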

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.72s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-541206 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-541206 --driver=kvm2  --container-runtime=crio: (1m36.466115024s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-541206 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.72s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (42.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-541206 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-541206 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.215009324s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-541206 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-541206 status -o json: exit status 2 (255.647021ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-541206","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-541206
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-541206: (1.030729311s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.50s)
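Here an existing profile is restarted with --no-kubernetes: the VM keeps running but the kubelet is stopped, which is why the JSON status reports Host Running / Kubelet Stopped and the command exits 2 instead of 0. The same check by hand:

    # drop Kubernetes from the running profile, then read the machine state as JSON
    out/minikube-linux-amd64 start -p NoKubernetes-541206 --no-kubernetes --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p NoKubernetes-541206 status -o json   # expect "Host":"Running","Kubelet":"Stopped", exit code 2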

                                                
                                    
TestNoKubernetes/serial/Start (27.36s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-541206 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-541206 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.363555344s)
--- PASS: TestNoKubernetes/serial/Start (27.36s)

                                                
                                    
TestNetworkPlugins/group/false (2.96s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-021279 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-021279 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (106.291767ms)

                                                
                                                
-- stdout --
	* [false-021279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0603 13:32:37.756708 1123225 out.go:291] Setting OutFile to fd 1 ...
	I0603 13:32:37.756950 1123225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:32:37.756959 1123225 out.go:304] Setting ErrFile to fd 2...
	I0603 13:32:37.756963 1123225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0603 13:32:37.757135 1123225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
	I0603 13:32:37.757746 1123225 out.go:298] Setting JSON to false
	I0603 13:32:37.758738 1123225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15305,"bootTime":1717406253,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0603 13:32:37.758810 1123225 start.go:139] virtualization: kvm guest
	I0603 13:32:37.761381 1123225 out.go:177] * [false-021279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0603 13:32:37.762638 1123225 out.go:177]   - MINIKUBE_LOCATION=19011
	I0603 13:32:37.763895 1123225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0603 13:32:37.762641 1123225 notify.go:220] Checking for updates...
	I0603 13:32:37.766510 1123225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
	I0603 13:32:37.767888 1123225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
	I0603 13:32:37.769313 1123225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0603 13:32:37.770786 1123225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0603 13:32:37.772641 1123225 config.go:182] Loaded profile config "NoKubernetes-541206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0603 13:32:37.772745 1123225 config.go:182] Loaded profile config "kubernetes-upgrade-423965": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0603 13:32:37.772823 1123225 config.go:182] Loaded profile config "running-upgrade-439186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0603 13:32:37.772909 1123225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0603 13:32:37.810495 1123225 out.go:177] * Using the kvm2 driver based on user configuration
	I0603 13:32:37.811809 1123225 start.go:297] selected driver: kvm2
	I0603 13:32:37.811832 1123225 start.go:901] validating driver "kvm2" against <nil>
	I0603 13:32:37.811845 1123225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0603 13:32:37.813921 1123225 out.go:177] 
	W0603 13:32:37.815278 1123225 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0603 13:32:37.816762 1123225 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-021279 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-021279" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Jun 2024 13:32:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.157:8443
  name: running-upgrade-439186
contexts:
- context:
    cluster: running-upgrade-439186
    user: running-upgrade-439186
  name: running-upgrade-439186
current-context: running-upgrade-439186
kind: Config
preferences: {}
users:
- name: running-upgrade-439186
  user:
    client-certificate: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/running-upgrade-439186/client.crt
    client-key: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/running-upgrade-439186/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-021279

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-021279"

                                                
                                                
----------------------- debugLogs end: false-021279 [took: 2.706795668s] --------------------------------
helpers_test.go:175: Cleaning up "false-021279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-021279
--- PASS: TestNetworkPlugins/group/false (2.96s)
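The "false" CNI variant can never start under cri-o, which is exactly what the test asserts: the crio runtime needs a CNI plugin, so --cni=false is rejected with MK_USAGE (exit 14) before any machine is created, and the debugLogs block above is expected to find no such cluster. The one-line reproduction:

    # cri-o requires CNI, so disabling it is refused up front (exit code 14)
    out/minikube-linux-amd64 start -p false-021279 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio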

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (148.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3003754658 start -p stopped-upgrade-259751 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3003754658 start -p stopped-upgrade-259751 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (54.842274383s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3003754658 -p stopped-upgrade-259751 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3003754658 -p stopped-upgrade-259751 stop: (2.142893013s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-259751 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-259751 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.744891986s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (148.73s)
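This is the same binary-upgrade idea as TestRunningBinaryUpgrade, except the old cluster is stopped first, so the new binary has to bring it back from a cold state. Roughly:

    # the old release creates and stops the cluster; the binary under test restarts it
    /tmp/minikube-v1.26.0.3003754658 start -p stopped-upgrade-259751 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.3003754658 -p stopped-upgrade-259751 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-259751 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio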

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-541206 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-541206 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.036345ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
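The "failure" here is the point of the test: with Kubernetes disabled the kubelet unit must be inactive, so the remote systemctl is-active exits 3 (visible in stderr) and minikube ssh surfaces that as a non-zero exit of its own. To check it directly:

    # kubelet should not be running in a --no-kubernetes profile; a non-zero exit is the expected outcome
    out/minikube-linux-amd64 ssh -p NoKubernetes-541206 "sudo systemctl is-active --quiet service kubelet"
    echo "ssh exit code: $?"   # non-zero while kubelet is inactive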

                                                
                                    
TestNoKubernetes/serial/ProfileList (26.6s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.725078596s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (12.875880943s)
--- PASS: TestNoKubernetes/serial/ProfileList (26.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-541206
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-541206: (1.279719204s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-541206 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-541206 --driver=kvm2  --container-runtime=crio: (21.198262363s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-541206 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-541206 "sudo systemctl is-active --quiet service kubelet": exit status 1 (234.042472ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
x
+
TestPause/serial/Start (98.46s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-374510 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0603 13:34:58.229131 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-374510 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m38.464275881s)
--- PASS: TestPause/serial/Start (98.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-259751
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (64.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m4.527267247s)
--- PASS: TestNetworkPlugins/group/auto/Start (64.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (116.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m56.606802598s)
--- PASS: TestNetworkPlugins/group/calico/Start (116.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-021279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-021279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xkr97" [c5740855-096b-4a9f-868c-f87aec7a0ca0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xkr97" [c5740855-096b-4a9f-868c-f87aec7a0ca0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005257989s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-021279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
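
The DNS, Localhost and HairPin subtests above share one pattern: exec a probe inside the netcat deployment and use its exit status as the verdict (nslookup for service DNS, nc to localhost for in-pod connectivity, nc to the service name for hairpin traffic back through the Service). The sketch below runs the same three probes against an existing context; the context name is taken from the run above, and the wrapper itself is illustrative rather than the test code.

	// netcat_probes.go: run the DNS / localhost / hairpin probes against the netcat deployment.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func probe(ctx, shellCmd string) error {
		// Same shape as the log lines: kubectl --context <ctx> exec deployment/netcat -- /bin/sh -c "<cmd>"
		return exec.Command("kubectl", "--context", ctx,
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).Run()
	}

	func main() {
		ctx := "auto-021279"
		checks := []struct{ name, cmd string }{
			{"DNS", "nslookup kubernetes.default"},
			{"Localhost", "nc -w 5 -i 5 -z localhost 8080"},
			{"HairPin", "nc -w 5 -i 5 -z netcat 8080"}, // pod reaching its own Service name
		}
		for _, c := range checks {
			if err := probe(ctx, c.cmd); err != nil {
				fmt.Printf("%s probe failed: %v\n", c.name, err)
				continue
			}
			fmt.Printf("%s probe passed\n", c.name)
		}
	}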

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (81.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.506111518s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (93.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m33.610342999s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (123.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m3.264600505s)
--- PASS: TestNetworkPlugins/group/flannel/Start (123.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-24rhk" [68dcf958-c921-480f-a1a3-28ebe9bd2c80] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005619642s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-021279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-021279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-d8666" [aec06cad-eed9-427b-b729-4f5b37fcd9ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-d8666" [aec06cad-eed9-427b-b729-4f5b37fcd9ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003816619s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-021279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (73.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m13.942097306s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-021279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-021279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hllr8" [15c0e572-180e-47b5-aeea-67f08062c0c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hllr8" [15c0e572-180e-47b5-aeea-67f08062c0c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.005839395s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-021279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-jzlgz" [5be2604a-ca1a-45b3-af50-a7c5327f40ef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005317515s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-021279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-021279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cn4zs" [b854a75d-a9a0-4527-a286-0d397496c5be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cn4zs" [b854a75d-a9a0-4527-a286-0d397496c5be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005915482s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (68.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-021279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m8.068473682s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-021279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mkg6x" [948d8d4c-ce0b-41bd-9572-0ed0202017ec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007720013s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-021279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-021279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kwvw2" [d4d1c49c-692a-4cbb-b55e-591ec38c4b04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kwvw2" [d4d1c49c-692a-4cbb-b55e-591ec38c4b04] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 16.004872134s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-021279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-021279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-njrwp" [d1f333d7-501d-4ec0-9c22-4f1cb0629e8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-njrwp" [d1f333d7-501d-4ec0-9c22-4f1cb0629e8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004201207s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-021279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-021279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (79.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-817450 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-817450 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m19.60021838s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (131.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-223260 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-223260 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (2m11.206398251s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (131.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-021279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-021279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tmhhl" [14409d8b-62d7-4ff2-9d08-8429bb1b89a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tmhhl" [14409d8b-62d7-4ff2-9d08-8429bb1b89a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004271208s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-021279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-021279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (129.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-030870 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-030870 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (2m9.399239417s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (129.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-817450 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0327e5ff-d87f-41eb-89ff-f43a8195653e] Pending
helpers_test.go:344: "busybox" [0327e5ff-d87f-41eb-89ff-f43a8195653e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0327e5ff-d87f-41eb-89ff-f43a8195653e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005896755s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-817450 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)
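
The DeployApp step is a create / wait / exec sequence: apply testdata/busybox.yaml, wait for the pod labelled integration-test=busybox to become Ready, then exec a trivial command ("ulimit -n") to confirm the container is usable. The sketch below replays it with "kubectl wait" standing in for the suite's own polling helper; the context name comes from the run above.

	// deploy_app_sketch.go: create / wait / exec replay of the DeployApp step.
	package main

	import (
		"log"
		"os/exec"
	)

	func kubectl(ctx string, args ...string) {
		cmd := exec.Command("kubectl", append([]string{"--context", ctx}, args...)...)
		cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubectl %v failed: %v", args, err)
		}
	}

	func main() {
		ctx := "no-preload-817450"
		kubectl(ctx, "create", "-f", "testdata/busybox.yaml")
		// kubectl wait stands in for the polling loop in helpers_test.go.
		kubectl(ctx, "wait", "--for=condition=ready", "pod", "-l", "integration-test=busybox", "--timeout=8m0s")
		kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	}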

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-817450 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-817450 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.176982625s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-817450 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-223260 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [281b59a6-05da-460b-a9de-353a33f7d95c] Pending
helpers_test.go:344: "busybox" [281b59a6-05da-460b-a9de-353a33f7d95c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [281b59a6-05da-460b-a9de-353a33f7d95c] Running
E0603 13:43:08.900714 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004288016s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-223260 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-223260 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-223260 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.035947562s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-223260 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-030870 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f50f4fd8-3455-456e-805d-c17087c1ca83] Pending
helpers_test.go:344: "busybox" [f50f4fd8-3455-456e-805d-c17087c1ca83] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0603 13:43:32.255255 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/calico-021279/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f50f4fd8-3455-456e-805d-c17087c1ca83] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004342269s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-030870 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-030870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-030870 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (695.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-817450 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0603 13:44:50.604917 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:44:54.195105 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/custom-flannel-021279/client.crt: no such file or directory
E0603 13:44:58.228287 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/functional-093300/client.crt: no such file or directory
E0603 13:45:11.085622 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:45:11.526531 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:11.531806 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:11.542109 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:11.562453 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:11.602783 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:11.683167 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:11.782480 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/auto-021279/client.crt: no such file or directory
E0603 13:45:11.843689 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:12.164104 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-817450 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (11m35.421007476s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-817450 -n no-preload-817450
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (695.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (525.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-223260 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0603 13:45:52.046630 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/kindnet-021279/client.crt: no such file or directory
E0603 13:45:52.489296 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 13:45:54.354803 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:45:54.360087 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:45:54.370353 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:45:54.390613 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:45:54.430912 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:45:54.511323 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:45:54.672115 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:45:54.992849 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:45:55.633079 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
E0603 13:45:56.359934 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
E0603 13:45:56.913606 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-223260 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (8m45.546380262s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-223260 -n embed-certs-223260
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (525.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (519.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-030870 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0603 13:46:14.835568 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/bridge-021279/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-030870 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (8m39.177775346s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-030870 -n default-k8s-diff-port-030870
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (519.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-151788 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-151788 --alsologtostderr -v=3: (2.301507479s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-151788 -n old-k8s-version-151788: exit status 7 (62.829712ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-151788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
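
"minikube status" reports cluster state both on stdout and through its exit code, so against a stopped profile it prints "Stopped" and exits non-zero (status 7 in the runs above); the test notes "may be ok" and proceeds to enable the dashboard addon offline. A small sketch of that tolerant status check, assuming the same binary path and profile as above:

	// status_tolerant.go: treat a non-zero "minikube status" exit on a stopped profile as acceptable.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "old-k8s-version-151788"
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		host := strings.TrimSpace(string(out))
		if err != nil && host != "Stopped" {
			fmt.Printf("unexpected status failure: %v (host=%q)\n", err, host)
			return
		}
		// A stopped host still prints "Stopped" on stdout even though the exit code is non-zero.
		fmt.Printf("host=%q (status err: %v); safe to run addon commands offline\n", host, err)
	}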

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (54.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-937150 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0603 14:10:11.525585 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/flannel-021279/client.crt: no such file or directory
E0603 14:10:15.397795 1086251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/enable-default-cni-021279/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-937150 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (54.770743226s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.77s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-937150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-937150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075518519s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-937150 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-937150 --alsologtostderr -v=3: (7.355595906s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-937150 -n newest-cni-937150
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-937150 -n newest-cni-937150: exit status 7 (63.514985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-937150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-937150 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-937150 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (35.938470768s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-937150 -n newest-cni-937150
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.19s)
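The second start reuses the exact flag set from the first start against the already-provisioned profile. A minimal sketch of reproducing that restart locally, assuming the newest-cni-937150 profile already exists and the same minikube build is on PATH (command and flags copied from the run above):

	# Restart the stopped profile with the same flags used by the test
	out/minikube-linux-amd64 start -p newest-cni-937150 \
	  --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.30.1

	# Confirm the host came back up
	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-937150 -n newest-cni-937150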

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-937150 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-937150 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-937150 -n newest-cni-937150
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-937150 -n newest-cni-937150: exit status 2 (237.258591ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-937150 -n newest-cni-937150
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-937150 -n newest-cni-937150: exit status 2 (236.799086ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-937150 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-937150 -n newest-cni-937150
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-937150 -n newest-cni-937150
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)
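The pause check alternates pause/unpause with status probes. A minimal sketch of that sequence, assuming the same newest-cni-937150 profile (commands copied from the run above); note that status exits with code 2 while the cluster is paused, which the test treats as acceptable:

	out/minikube-linux-amd64 pause -p newest-cni-937150 --alsologtostderr -v=1
	# While paused, the API server reports Paused and the kubelet reports Stopped
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-937150 -n newest-cni-937150
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-937150 -n newest-cni-937150
	out/minikube-linux-amd64 unpause -p newest-cni-937150 --alsologtostderr -v=1
	# After unpause, both status probes should succeed again
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-937150 -n newest-cni-937150
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-937150 -n newest-cni-937150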

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.1/cached-images 0
15 TestDownloadOnly/v1.30.1/binaries 0
16 TestDownloadOnly/v1.30.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
41 TestAddons/parallel/Volcano 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.23
265 TestNetworkPlugins/group/cilium 3.22
278 TestStartStop/group/disable-driver-mounts 0.41
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-021279 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-021279" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Jun 2024 13:32:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.157:8443
  name: running-upgrade-439186
contexts:
- context:
    cluster: running-upgrade-439186
    user: running-upgrade-439186
  name: running-upgrade-439186
current-context: running-upgrade-439186
kind: Config
preferences: {}
users:
- name: running-upgrade-439186
  user:
    client-certificate: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/running-upgrade-439186/client.crt
    client-key: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/running-upgrade-439186/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-021279

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-021279"

                                                
                                                
----------------------- debugLogs end: kubenet-021279 [took: 3.092949177s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-021279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-021279
--- SKIP: TestNetworkPlugins/group/kubenet (3.23s)
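The cleanup step above removes the placeholder profile left behind by the skipped network-plugin run. A minimal sketch of the equivalent manual cleanup, assuming a stale kubenet-021279 profile remains (commands taken from the log above):

	# List all minikube profiles to find leftovers
	out/minikube-linux-amd64 profile list
	# Delete the placeholder profile created for the skipped run
	out/minikube-linux-amd64 delete -p kubenet-021279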

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-021279 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-021279" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 03 Jun 2024 13:32:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.157:8443
  name: running-upgrade-439186
contexts:
- context:
    cluster: running-upgrade-439186
    user: running-upgrade-439186
  name: running-upgrade-439186
current-context: running-upgrade-439186
kind: Config
preferences: {}
users:
- name: running-upgrade-439186
  user:
    client-certificate: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/running-upgrade-439186/client.crt
    client-key: /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/running-upgrade-439186/client.key
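
Every kubectl probe in this dump fails because the only context left in the kubeconfig is running-upgrade-439186; a cilium-021279 context was never written (or had already been removed). Below is a minimal sketch, not part of the minikube test suite, that performs the same check with client-go. The kubeconfig path (clientcmd.RecommendedHomeFile, i.e. ~/.kube/config) is an assumption; the dump does not say which file the job used.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// clientcmd.RecommendedHomeFile resolves to ~/.kube/config; this is an
	// assumption for the sketch, not taken from the dump above.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
	if _, ok := cfg.Contexts["cilium-021279"]; !ok {
		// Matches the repeated kubectl errors in the sections above.
		fmt.Println(`context "cilium-021279" does not exist`)
	}
}

Run against the config shown above, this would list only the running-upgrade-439186 context.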

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-021279

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-021279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-021279"

                                                
                                                
----------------------- debugLogs end: cilium-021279 [took: 3.082339353s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-021279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-021279
--- SKIP: TestNetworkPlugins/group/cilium (3.22s)
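
The debugLogs block above is a fixed list of probes, each printed under a ">>> <title>:" heading together with the command's combined stdout/stderr, which is why the per-section error messages still appear even though no command succeeds. A minimal sketch of that pattern in Go follows; the exact kubectl and minikube arguments behind each heading are assumptions for illustration, not minikube's real helper.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Two representative probes; the titles mirror the headings above, but the
	// argument lists are assumptions made for this sketch.
	probes := []struct {
		title string
		args  []string
	}{
		{"k8s: describe cilium deployment", []string{"kubectl", "--context", "cilium-021279", "--namespace", "kube-system", "describe", "deployment", "--selector", "k8s-app=cilium"}},
		{"host: crio daemon status", []string{"minikube", "-p", "cilium-021279", "ssh", "sudo systemctl status crio"}},
	}
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.title)
		// CombinedOutput keeps stderr, so errors such as
		// `context "cilium-021279" does not exist` land under their heading,
		// exactly as in the dump above.
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil && len(out) == 0 {
			fmt.Println(err)
		}
		fmt.Println()
	}
}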

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-069000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-069000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.41s)
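
The disable-driver-mounts group is gated on the VirtualBox driver, so on this KVM job it only creates and then deletes a placeholder profile. A minimal sketch of such a driver-gated skip, assuming the driver name is exposed to the test (the real start_stop_delete_test.go wiring is not shown here):

package test

import (
	"os/exec"
	"testing"
)

// In the real suite the driver comes from the test harness flags; it is
// hardcoded here only for illustration.
const driver = "kvm2"

func TestDisableDriverMountsSketch(t *testing.T) {
	if driver != "virtualbox" {
		// Delete the placeholder profile before skipping, mirroring the
		// cleanup lines in the output above.
		profile := "disable-driver-mounts-069000"
		if out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput(); err != nil {
			t.Logf("cleanup failed: %v\n%s", err, out)
		}
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// ... the real test body would start the cluster and exercise mounts here ...
}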

                                                
                                    